Jan 17 00:15:00.128889 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:15:00.128937 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:15:00.128960 kernel: BIOS-provided physical RAM map: Jan 17 00:15:00.128974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 00:15:00.128985 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 00:15:00.128998 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 00:15:00.129014 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 17 00:15:00.129028 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 17 00:15:00.129039 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 00:15:00.129057 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 00:15:00.129070 kernel: NX (Execute Disable) protection: active Jan 17 00:15:00.129083 kernel: APIC: Static calls initialized Jan 17 00:15:00.129103 kernel: SMBIOS 2.8 present. Jan 17 00:15:00.129117 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 17 00:15:00.129132 kernel: Hypervisor detected: KVM Jan 17 00:15:00.129151 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:15:00.129169 kernel: kvm-clock: using sched offset of 3962810768 cycles Jan 17 00:15:00.129185 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:15:00.129200 kernel: tsc: Detected 2294.604 MHz processor Jan 17 00:15:00.129215 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:15:00.129230 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:15:00.129246 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 17 00:15:00.129260 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 00:15:00.129274 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:15:00.129293 kernel: ACPI: Early table checksum verification disabled Jan 17 00:15:00.129307 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 17 00:15:00.129322 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.129336 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.129350 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.129364 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 17 00:15:00.129379 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.129393 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.130138 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.130175 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:15:00.130191 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe17e1-0x7ffe1854] Jan 17 00:15:00.130206 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Jan 17 00:15:00.130221 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 17 00:15:00.130235 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Jan 17 00:15:00.130250 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Jan 17 00:15:00.130265 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Jan 17 00:15:00.130291 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Jan 17 00:15:00.130307 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:15:00.130323 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:15:00.130338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 00:15:00.130354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 00:15:00.130379 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 17 00:15:00.130395 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 17 00:15:00.130434 kernel: Zone ranges: Jan 17 00:15:00.130450 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:15:00.130467 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 17 00:15:00.130482 kernel: Normal empty Jan 17 00:15:00.130498 kernel: Movable zone start for each node Jan 17 00:15:00.130513 kernel: Early memory node ranges Jan 17 00:15:00.130529 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 00:15:00.130545 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 17 00:15:00.130561 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 17 00:15:00.130581 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:15:00.130597 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 00:15:00.130617 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 17 00:15:00.130633 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:15:00.130648 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:15:00.130664 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:15:00.130679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:15:00.130696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:15:00.130710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:15:00.130731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:15:00.130746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:15:00.130762 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:15:00.130777 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:15:00.130793 kernel: TSC deadline timer available Jan 17 00:15:00.130808 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:15:00.130824 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:15:00.130840 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 17 00:15:00.130859 kernel: Booting paravirtualized kernel on KVM Jan 17 00:15:00.130874 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:15:00.130895 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:15:00.130910 kernel: percpu: Embedded 57 pages/cpu 
s196328 r8192 d28952 u1048576 Jan 17 00:15:00.130926 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:15:00.130941 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:15:00.130956 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 17 00:15:00.130974 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:15:00.130990 kernel: random: crng init done Jan 17 00:15:00.131005 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:15:00.131025 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:15:00.131042 kernel: Fallback order for Node 0: 0 Jan 17 00:15:00.131058 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 17 00:15:00.131074 kernel: Policy zone: DMA32 Jan 17 00:15:00.131089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:15:00.131105 kernel: Memory: 1971212K/2096612K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 125140K reserved, 0K cma-reserved) Jan 17 00:15:00.131121 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:15:00.131135 kernel: Kernel/User page tables isolation: enabled Jan 17 00:15:00.131157 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:15:00.131172 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:15:00.131188 kernel: Dynamic Preempt: voluntary Jan 17 00:15:00.131203 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:15:00.131220 kernel: rcu: RCU event tracing is enabled. Jan 17 00:15:00.131238 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:15:00.131253 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:15:00.131269 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:15:00.131284 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:15:00.131301 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:15:00.131322 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:15:00.131339 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 00:15:00.131362 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:15:00.131385 kernel: Console: colour VGA+ 80x25 Jan 17 00:15:00.131426 kernel: printk: console [tty0] enabled Jan 17 00:15:00.131450 kernel: printk: console [ttyS0] enabled Jan 17 00:15:00.131475 kernel: ACPI: Core revision 20230628 Jan 17 00:15:00.131497 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:15:00.131517 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:15:00.131543 kernel: x2apic enabled Jan 17 00:15:00.131563 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:15:00.131583 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:15:00.131603 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134be0148, max_idle_ns: 440795257049 ns Jan 17 00:15:00.131623 kernel: Calibrating delay loop (skipped) preset value.. 
4589.20 BogoMIPS (lpj=2294604) Jan 17 00:15:00.131642 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 00:15:00.131662 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 00:15:00.131682 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:15:00.131722 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:15:00.131743 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:15:00.131763 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 00:15:00.131788 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 00:15:00.131810 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 00:15:00.131831 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 00:15:00.131851 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:15:00.131873 kernel: active return thunk: its_return_thunk Jan 17 00:15:00.131898 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:15:00.131923 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:15:00.131944 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:15:00.131965 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:15:00.131987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:15:00.132008 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 00:15:00.132029 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:15:00.132050 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:15:00.132071 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:15:00.132096 kernel: landlock: Up and running. Jan 17 00:15:00.132117 kernel: SELinux: Initializing. Jan 17 00:15:00.132138 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 00:15:00.132159 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 00:15:00.132180 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 17 00:15:00.132202 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:15:00.132223 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:15:00.132244 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:15:00.132304 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jan 17 00:15:00.132331 kernel: signal: max sigframe size: 1776 Jan 17 00:15:00.132356 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:15:00.132383 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:15:00.132397 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:15:00.132626 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:15:00.132641 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:15:00.132651 kernel: .... 
node #0, CPUs: #1 Jan 17 00:15:00.132660 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:15:00.132678 kernel: smpboot: Max logical packages: 1 Jan 17 00:15:00.132694 kernel: smpboot: Total of 2 processors activated (9178.41 BogoMIPS) Jan 17 00:15:00.132704 kernel: devtmpfs: initialized Jan 17 00:15:00.132714 kernel: x86/mm: Memory block size: 128MB Jan 17 00:15:00.132724 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:15:00.132734 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:15:00.132743 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:15:00.132753 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:15:00.132763 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:15:00.132772 kernel: audit: type=2000 audit(1768608897.898:1): state=initialized audit_enabled=0 res=1 Jan 17 00:15:00.132786 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:15:00.132795 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:15:00.132805 kernel: cpuidle: using governor menu Jan 17 00:15:00.132814 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:15:00.132824 kernel: dca service started, version 1.12.1 Jan 17 00:15:00.132833 kernel: PCI: Using configuration type 1 for base access Jan 17 00:15:00.132843 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:15:00.132853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:15:00.132862 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:15:00.132876 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:15:00.132885 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:15:00.132895 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:15:00.132904 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:15:00.132914 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:15:00.132924 kernel: ACPI: Interpreter enabled Jan 17 00:15:00.132933 kernel: ACPI: PM: (supports S0 S5) Jan 17 00:15:00.132943 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:15:00.132953 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:15:00.132965 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 00:15:00.132975 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 17 00:15:00.132985 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:15:00.133255 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:15:00.133476 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 00:15:00.133665 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 00:15:00.133695 kernel: acpiphp: Slot [3] registered Jan 17 00:15:00.133719 kernel: acpiphp: Slot [4] registered Jan 17 00:15:00.133733 kernel: acpiphp: Slot [5] registered Jan 17 00:15:00.133745 kernel: acpiphp: Slot [6] registered Jan 17 00:15:00.133760 kernel: acpiphp: Slot [7] registered Jan 17 00:15:00.133814 kernel: acpiphp: Slot [8] registered Jan 17 00:15:00.133828 kernel: acpiphp: Slot [9] registered Jan 17 00:15:00.133845 kernel: acpiphp: Slot [10] registered Jan 17 00:15:00.133871 kernel: acpiphp: Slot [11] registered Jan 17 00:15:00.133896 kernel: 
acpiphp: Slot [12] registered Jan 17 00:15:00.133917 kernel: acpiphp: Slot [13] registered Jan 17 00:15:00.133955 kernel: acpiphp: Slot [14] registered Jan 17 00:15:00.133965 kernel: acpiphp: Slot [15] registered Jan 17 00:15:00.133974 kernel: acpiphp: Slot [16] registered Jan 17 00:15:00.133984 kernel: acpiphp: Slot [17] registered Jan 17 00:15:00.133994 kernel: acpiphp: Slot [18] registered Jan 17 00:15:00.134004 kernel: acpiphp: Slot [19] registered Jan 17 00:15:00.134013 kernel: acpiphp: Slot [20] registered Jan 17 00:15:00.134023 kernel: acpiphp: Slot [21] registered Jan 17 00:15:00.134033 kernel: acpiphp: Slot [22] registered Jan 17 00:15:00.134046 kernel: acpiphp: Slot [23] registered Jan 17 00:15:00.134056 kernel: acpiphp: Slot [24] registered Jan 17 00:15:00.134065 kernel: acpiphp: Slot [25] registered Jan 17 00:15:00.134075 kernel: acpiphp: Slot [26] registered Jan 17 00:15:00.134084 kernel: acpiphp: Slot [27] registered Jan 17 00:15:00.134094 kernel: acpiphp: Slot [28] registered Jan 17 00:15:00.134103 kernel: acpiphp: Slot [29] registered Jan 17 00:15:00.134113 kernel: acpiphp: Slot [30] registered Jan 17 00:15:00.134122 kernel: acpiphp: Slot [31] registered Jan 17 00:15:00.134132 kernel: PCI host bridge to bus 0000:00 Jan 17 00:15:00.134323 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:15:00.134466 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:15:00.134591 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:15:00.134719 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 00:15:00.134867 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 17 00:15:00.135027 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:15:00.135246 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 00:15:00.135545 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 00:15:00.135818 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 17 00:15:00.136049 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 17 00:15:00.136237 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 00:15:00.136442 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 00:15:00.136666 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 00:15:00.136871 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 00:15:00.137090 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 17 00:15:00.137260 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 17 00:15:00.140619 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 00:15:00.140836 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 17 00:15:00.141006 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 17 00:15:00.141224 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 17 00:15:00.141397 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 17 00:15:00.141632 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 17 00:15:00.141776 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 17 00:15:00.141881 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 00:15:00.141982 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:15:00.142111 kernel: 
pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:15:00.142226 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 17 00:15:00.142391 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 17 00:15:00.143721 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 17 00:15:00.143987 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:15:00.144152 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 17 00:15:00.144309 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 17 00:15:00.145542 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 17 00:15:00.145755 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 17 00:15:00.145869 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 17 00:15:00.145975 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 17 00:15:00.146076 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 17 00:15:00.146185 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:15:00.146287 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 00:15:00.146394 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 17 00:15:00.147666 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 17 00:15:00.147846 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:15:00.147999 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 17 00:15:00.148147 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 17 00:15:00.148295 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 17 00:15:00.149671 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 17 00:15:00.149860 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 17 00:15:00.150693 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 17 00:15:00.150725 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:15:00.150747 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:15:00.150769 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:15:00.150790 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:15:00.150811 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 00:15:00.150833 kernel: iommu: Default domain type: Translated Jan 17 00:15:00.150863 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:15:00.150891 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:15:00.150917 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:15:00.150950 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 00:15:00.150984 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 17 00:15:00.151155 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 17 00:15:00.151267 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 17 00:15:00.151370 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 00:15:00.151391 kernel: vgaarb: loaded Jan 17 00:15:00.151402 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 00:15:00.151435 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:15:00.151445 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:15:00.151455 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:15:00.151466 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 
17 00:15:00.151476 kernel: pnp: PnP ACPI init Jan 17 00:15:00.151486 kernel: pnp: PnP ACPI: found 4 devices Jan 17 00:15:00.151496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:15:00.151511 kernel: NET: Registered PF_INET protocol family Jan 17 00:15:00.151521 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:15:00.151531 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 00:15:00.151541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:15:00.151551 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:15:00.151560 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 00:15:00.151570 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 00:15:00.151585 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 00:15:00.151610 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 00:15:00.151642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:15:00.151670 kernel: NET: Registered PF_XDP protocol family Jan 17 00:15:00.151832 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:15:00.151966 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:15:00.152101 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:15:00.152235 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 00:15:00.152372 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 17 00:15:00.155707 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 17 00:15:00.155934 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 00:15:00.155960 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 17 00:15:00.156114 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 43343 usecs Jan 17 00:15:00.156137 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:15:00.156152 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:15:00.156168 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134be0148, max_idle_ns: 440795257049 ns Jan 17 00:15:00.156185 kernel: Initialise system trusted keyrings Jan 17 00:15:00.156201 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 00:15:00.156217 kernel: Key type asymmetric registered Jan 17 00:15:00.156241 kernel: Asymmetric key parser 'x509' registered Jan 17 00:15:00.156258 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:15:00.156275 kernel: io scheduler mq-deadline registered Jan 17 00:15:00.156289 kernel: io scheduler kyber registered Jan 17 00:15:00.156303 kernel: io scheduler bfq registered Jan 17 00:15:00.156318 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:15:00.156335 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 17 00:15:00.156352 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 00:15:00.156368 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 00:15:00.156391 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:15:00.156507 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:15:00.156523 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:15:00.156539 kernel: serio: i8042 KBD port at 0x60,0x64 irq 
1 Jan 17 00:15:00.156579 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:15:00.156842 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 00:15:00.156871 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:15:00.157023 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 00:15:00.157178 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:14:59 UTC (1768608899) Jan 17 00:15:00.157321 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 00:15:00.157343 kernel: intel_pstate: CPU model not supported Jan 17 00:15:00.157361 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:15:00.157379 kernel: Segment Routing with IPv6 Jan 17 00:15:00.157398 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:15:00.158518 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:15:00.158547 kernel: Key type dns_resolver registered Jan 17 00:15:00.158565 kernel: IPI shorthand broadcast: enabled Jan 17 00:15:00.158592 kernel: sched_clock: Marking stable (1443002228, 256422452)->(1772679681, -73255001) Jan 17 00:15:00.158609 kernel: registered taskstats version 1 Jan 17 00:15:00.158624 kernel: Loading compiled-in X.509 certificates Jan 17 00:15:00.158640 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:15:00.158655 kernel: Key type .fscrypt registered Jan 17 00:15:00.158670 kernel: Key type fscrypt-provisioning registered Jan 17 00:15:00.158684 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 00:15:00.158700 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:15:00.158716 kernel: ima: No architecture policies found Jan 17 00:15:00.158738 kernel: clk: Disabling unused clocks Jan 17 00:15:00.158755 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:15:00.158771 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:15:00.158788 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:15:00.158836 kernel: Run /init as init process Jan 17 00:15:00.158856 kernel: with arguments: Jan 17 00:15:00.158875 kernel: /init Jan 17 00:15:00.158901 kernel: with environment: Jan 17 00:15:00.158924 kernel: HOME=/ Jan 17 00:15:00.158945 kernel: TERM=linux Jan 17 00:15:00.158969 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:15:00.158991 systemd[1]: Detected virtualization kvm. Jan 17 00:15:00.159008 systemd[1]: Detected architecture x86-64. Jan 17 00:15:00.159024 systemd[1]: Running in initrd. Jan 17 00:15:00.159038 systemd[1]: No hostname configured, using default hostname. Jan 17 00:15:00.159055 systemd[1]: Hostname set to . Jan 17 00:15:00.159076 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:15:00.159090 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:15:00.159106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:15:00.159120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:15:00.159138 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 17 00:15:00.159153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:15:00.159171 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:15:00.159187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:15:00.159213 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:15:00.159232 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:15:00.159251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:15:00.159266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:15:00.159284 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:15:00.159301 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:15:00.159320 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:15:00.159344 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:15:00.159360 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:15:00.159380 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:15:00.160441 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:15:00.160474 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:15:00.160506 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:15:00.160567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:15:00.160596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:15:00.160623 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:15:00.160647 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:15:00.160671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:15:00.160701 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:15:00.160725 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:15:00.160755 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:15:00.160782 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:15:00.160806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:00.160830 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:15:00.160898 systemd-journald[186]: Collecting audit messages is disabled. Jan 17 00:15:00.160956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:15:00.160979 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:15:00.161007 systemd-journald[186]: Journal started Jan 17 00:15:00.161060 systemd-journald[186]: Runtime Journal (/run/log/journal/752a11dfacaf4b9195fc78322248f968) is 4.9M, max 39.3M, 34.4M free. Jan 17 00:15:00.166473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:15:00.156731 systemd-modules-load[187]: Inserted module 'overlay' Jan 17 00:15:00.265352 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 17 00:15:00.265429 kernel: Bridge firewalling registered Jan 17 00:15:00.265454 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:15:00.205552 systemd-modules-load[187]: Inserted module 'br_netfilter' Jan 17 00:15:00.266736 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:15:00.275903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:00.290790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:15:00.294650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:15:00.304883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:15:00.310516 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:15:00.324701 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:15:00.328390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:15:00.334898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:15:00.345733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:15:00.348509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:15:00.355650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:15:00.365957 dracut-cmdline[216]: dracut-dracut-053 Jan 17 00:15:00.370091 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:15:00.370327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:15:00.423007 systemd-resolved[221]: Positive Trust Anchors: Jan 17 00:15:00.423031 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:15:00.423087 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:15:00.428751 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 17 00:15:00.430526 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:15:00.432602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:15:00.506478 kernel: SCSI subsystem initialized Jan 17 00:15:00.519452 kernel: Loading iSCSI transport class v2.0-870. 
Jan 17 00:15:00.534452 kernel: iscsi: registered transport (tcp) Jan 17 00:15:00.563061 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:15:00.563181 kernel: QLogic iSCSI HBA Driver Jan 17 00:15:00.625615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:15:00.633751 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:15:00.686503 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:15:00.686615 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:15:00.692662 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:15:00.744513 kernel: raid6: avx2x4 gen() 15087 MB/s Jan 17 00:15:00.762500 kernel: raid6: avx2x2 gen() 15360 MB/s Jan 17 00:15:00.781733 kernel: raid6: avx2x1 gen() 11570 MB/s Jan 17 00:15:00.781884 kernel: raid6: using algorithm avx2x2 gen() 15360 MB/s Jan 17 00:15:00.800670 kernel: raid6: .... xor() 17855 MB/s, rmw enabled Jan 17 00:15:00.800796 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:15:00.834477 kernel: xor: automatically using best checksumming function avx Jan 17 00:15:01.076460 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:15:01.096904 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:15:01.104873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:15:01.141492 systemd-udevd[405]: Using default interface naming scheme 'v255'. Jan 17 00:15:01.150536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:15:01.161084 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:15:01.196761 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 17 00:15:01.254556 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:15:01.264895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:15:01.361913 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:15:01.373836 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:15:01.425199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:15:01.431002 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:15:01.435315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:15:01.438300 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:15:01.449762 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:15:01.489847 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 17 00:15:01.501586 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 17 00:15:01.519486 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 00:15:01.529458 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:15:01.539453 kernel: ACPI: bus type USB registered Jan 17 00:15:01.539548 kernel: usbcore: registered new interface driver usbfs Jan 17 00:15:01.541492 kernel: usbcore: registered new interface driver hub Jan 17 00:15:01.543785 kernel: usbcore: registered new device driver usb Jan 17 00:15:01.586742 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 17 00:15:01.587107 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 17 00:15:01.593220 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 17 00:15:01.593918 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 17 00:15:01.600184 kernel: hub 1-0:1.0: USB hub found Jan 17 00:15:01.600689 kernel: hub 1-0:1.0: 2 ports detected Jan 17 00:15:01.626526 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:15:01.626651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:15:01.626677 kernel: GPT:9289727 != 125829119 Jan 17 00:15:01.626698 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:15:01.626720 kernel: GPT:9289727 != 125829119 Jan 17 00:15:01.626760 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:15:01.626781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:15:01.661447 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 17 00:15:01.664475 kernel: libata version 3.00 loaded. Jan 17 00:15:01.678882 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 17 00:15:01.686440 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jan 17 00:15:01.686864 kernel: scsi host1: ata_piix Jan 17 00:15:01.695447 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:15:01.695529 kernel: AES CTR mode by8 optimization enabled Jan 17 00:15:01.699960 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:15:01.717189 kernel: scsi host2: ata_piix Jan 17 00:15:01.717858 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 17 00:15:01.717888 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 17 00:15:01.700154 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:15:01.716394 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:15:01.718013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:01.718278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:01.721136 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:01.733962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:01.891154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:01.939567 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459) Jan 17 00:15:01.955512 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (456) Jan 17 00:15:01.961999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:15:01.975609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 17 00:15:02.016655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:15:02.026902 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:15:02.030731 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:15:02.043979 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:15:02.057760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:15:02.077967 disk-uuid[543]: Primary Header is updated. Jan 17 00:15:02.077967 disk-uuid[543]: Secondary Entries is updated. Jan 17 00:15:02.077967 disk-uuid[543]: Secondary Header is updated. Jan 17 00:15:02.097100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:15:02.117457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:15:02.123450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:15:02.147507 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:15:03.147450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:15:03.148027 disk-uuid[544]: The operation has completed successfully. Jan 17 00:15:03.240953 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:15:03.241134 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:15:03.275882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:15:03.294451 sh[565]: Success Jan 17 00:15:03.320498 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 00:15:03.429193 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:15:03.433777 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:15:03.435283 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:15:03.479762 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:15:03.479892 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:15:03.481201 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:15:03.484633 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:15:03.488780 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:15:03.505328 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:15:03.508398 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:15:03.514973 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:15:03.524095 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:15:03.552539 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:03.552664 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:15:03.558061 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:15:03.568665 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:15:03.588293 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 17 00:15:03.594745 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:03.605709 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:15:03.619854 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:15:03.772378 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:15:03.796084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:15:03.849336 systemd-networkd[750]: lo: Link UP Jan 17 00:15:03.849348 systemd-networkd[750]: lo: Gained carrier Jan 17 00:15:03.852352 systemd-networkd[750]: Enumeration completed Jan 17 00:15:03.852599 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:15:03.853169 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 00:15:03.853175 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 17 00:15:03.854622 systemd[1]: Reached target network.target - Network. Jan 17 00:15:03.858079 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:15:03.858084 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:15:03.859047 systemd-networkd[750]: eth0: Link UP Jan 17 00:15:03.859053 systemd-networkd[750]: eth0: Gained carrier Jan 17 00:15:03.859066 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 00:15:03.874765 ignition[655]: Ignition 2.19.0 Jan 17 00:15:03.864705 systemd-networkd[750]: eth1: Link UP Jan 17 00:15:03.874780 ignition[655]: Stage: fetch-offline Jan 17 00:15:03.864711 systemd-networkd[750]: eth1: Gained carrier Jan 17 00:15:03.874867 ignition[655]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:03.864727 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:15:03.874883 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:03.877559 systemd-networkd[750]: eth0: DHCPv4 address 64.227.98.118/20, gateway 64.227.96.1 acquired from 169.254.169.253 Jan 17 00:15:03.875071 ignition[655]: parsed url from cmdline: "" Jan 17 00:15:03.878118 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:15:03.875077 ignition[655]: no config URL provided Jan 17 00:15:03.880646 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253 Jan 17 00:15:03.875085 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:15:03.889935 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 00:15:03.875102 ignition[655]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:15:03.875112 ignition[655]: failed to fetch config: resource requires networking Jan 17 00:15:03.875439 ignition[655]: Ignition finished successfully Jan 17 00:15:03.931261 ignition[758]: Ignition 2.19.0 Jan 17 00:15:03.931284 ignition[758]: Stage: fetch Jan 17 00:15:03.933122 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:03.933148 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:03.933351 ignition[758]: parsed url from cmdline: "" Jan 17 00:15:03.933358 ignition[758]: no config URL provided Jan 17 00:15:03.933368 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:15:03.933385 ignition[758]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:15:03.933437 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 17 00:15:04.686515 ignition[758]: GET result: OK Jan 17 00:15:04.686667 ignition[758]: parsing config with SHA512: 151e6a7b6c91e84ede98c402e8ee4b91b823c07a6db7c76e10612bf421e8a091a0a35acb60fafab5d81c438e3032f5f011f361d762d4f6ced74c261877aeb60c Jan 17 00:15:04.698012 unknown[758]: fetched base config from "system" Jan 17 00:15:04.698749 ignition[758]: fetch: fetch complete Jan 17 00:15:04.698027 unknown[758]: fetched base config from "system" Jan 17 00:15:04.698759 ignition[758]: fetch: fetch passed Jan 17 00:15:04.698038 unknown[758]: fetched user config from "digitalocean" Jan 17 00:15:04.698877 ignition[758]: Ignition finished successfully Jan 17 00:15:04.708116 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:15:04.714784 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:15:04.754267 ignition[764]: Ignition 2.19.0 Jan 17 00:15:04.754297 ignition[764]: Stage: kargs Jan 17 00:15:04.754834 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:04.754855 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:04.758747 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:15:04.756727 ignition[764]: kargs: kargs passed Jan 17 00:15:04.756833 ignition[764]: Ignition finished successfully Jan 17 00:15:04.766776 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:15:04.800456 ignition[770]: Ignition 2.19.0 Jan 17 00:15:04.800482 ignition[770]: Stage: disks Jan 17 00:15:04.800914 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:04.800937 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:04.804765 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:15:04.802656 ignition[770]: disks: disks passed Jan 17 00:15:04.802763 ignition[770]: Ignition finished successfully Jan 17 00:15:04.815600 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:15:04.817803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:15:04.819539 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:15:04.821482 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:15:04.823890 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:15:04.831868 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 17 00:15:04.865497 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:15:04.871522 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:15:04.882137 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:15:05.034458 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:15:05.035856 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:15:05.038824 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:15:05.050638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:15:05.055673 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:15:05.059960 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 17 00:15:05.072447 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786) Jan 17 00:15:05.072606 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:15:05.091981 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:05.092030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:15:05.092050 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:15:05.073751 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:15:05.073821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:15:05.101146 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:15:05.110833 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:15:05.117051 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:15:05.120828 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:15:05.202742 coreos-metadata[789]: Jan 17 00:15:05.202 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:15:05.218307 coreos-metadata[788]: Jan 17 00:15:05.217 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:15:05.221954 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:15:05.225255 coreos-metadata[789]: Jan 17 00:15:05.225 INFO Fetch successful Jan 17 00:15:05.233839 coreos-metadata[789]: Jan 17 00:15:05.233 INFO wrote hostname ci-4081.3.6-n-912fd252f4 to /sysroot/etc/hostname Jan 17 00:15:05.237039 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:15:05.242139 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:15:05.244720 coreos-metadata[788]: Jan 17 00:15:05.241 INFO Fetch successful Jan 17 00:15:05.255733 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:15:05.258821 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 17 00:15:05.259032 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 17 00:15:05.271070 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:15:05.452083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 17 00:15:05.458969 systemd-networkd[750]: eth1: Gained IPv6LL Jan 17 00:15:05.462698 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:15:05.466908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:15:05.486845 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:15:05.491004 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:05.520031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:15:05.522572 systemd-networkd[750]: eth0: Gained IPv6LL Jan 17 00:15:05.550094 ignition[907]: INFO : Ignition 2.19.0 Jan 17 00:15:05.554018 ignition[907]: INFO : Stage: mount Jan 17 00:15:05.554018 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:05.554018 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:05.557396 ignition[907]: INFO : mount: mount passed Jan 17 00:15:05.557396 ignition[907]: INFO : Ignition finished successfully Jan 17 00:15:05.557365 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:15:05.575807 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:15:06.043888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:15:06.073437 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918) Jan 17 00:15:06.077544 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:15:06.077651 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:15:06.079809 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:15:06.086488 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:15:06.090833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:15:06.129502 ignition[935]: INFO : Ignition 2.19.0 Jan 17 00:15:06.129502 ignition[935]: INFO : Stage: files Jan 17 00:15:06.131379 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:06.131379 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:06.133755 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:15:06.133755 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:15:06.133755 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:15:06.139222 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:15:06.140582 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:15:06.141700 unknown[935]: wrote ssh authorized keys file for user: core Jan 17 00:15:06.143020 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:15:06.144285 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:15:06.145713 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:15:06.200824 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:15:06.275467 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:15:06.275467 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:15:06.275467 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:15:06.275467 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:15:06.275467 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:15:06.275467 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:06.284259 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:15:06.786634 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:15:07.911701 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:15:07.914315 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:15:07.914315 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:15:07.914315 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:15:07.914315 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:15:07.914315 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:15:07.923628 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:15:07.923628 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:15:07.923628 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:15:07.923628 ignition[935]: INFO : files: files passed Jan 17 00:15:07.923628 ignition[935]: INFO : Ignition finished successfully Jan 17 00:15:07.918621 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:15:07.928846 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:15:07.942714 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:15:07.951849 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:15:07.952021 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:15:07.962130 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:15:07.962130 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:15:07.965891 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:15:07.968520 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:15:07.970763 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:15:07.977756 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:15:08.028754 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:15:08.028926 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:15:08.031419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
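The files stage above is driven by the Ignition config fetched from user-data earlier. The actual config is not reproduced in the log, but a hypothetical Ignition v3 fragment of the shape that would yield the helm download, the kubernetes sysext link and the prepare-helm.service preset looks roughly like this; the paths and URLs are the ones Ignition reports above, while the spec version and the unit contents are assumptions:

# Hypothetical Ignition v3 config fragment matching the operations logged above.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed; Ignition 2.19.0 accepts v3 specs
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            # Unit body elided; only the name and preset are visible in the log.
            {"name": "prepare-helm.service", "enabled": True, "contents": "..."}
        ]
    },
}

print(json.dumps(config, indent=2))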
Jan 17 00:15:08.032803 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:15:08.034646 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:15:08.042707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:15:08.063919 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:15:08.069612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:15:08.097598 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:15:08.098576 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:15:08.100715 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:15:08.102400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:15:08.102621 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:15:08.104688 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:15:08.105845 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:15:08.107613 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:15:08.109119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:15:08.110760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:15:08.112609 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:15:08.114344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:15:08.116082 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:15:08.117732 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:15:08.119564 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:15:08.121178 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:15:08.121369 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:15:08.123326 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:15:08.124490 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:15:08.126001 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:15:08.126169 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:15:08.127756 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:15:08.127993 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:15:08.130253 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:15:08.130520 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:15:08.132655 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:15:08.132850 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:15:08.134539 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:15:08.134823 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:15:08.145288 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:15:08.147761 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 17 00:15:08.149958 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:15:08.150796 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:15:08.155265 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:15:08.155588 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:15:08.169313 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:15:08.169493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:15:08.186008 ignition[987]: INFO : Ignition 2.19.0 Jan 17 00:15:08.189575 ignition[987]: INFO : Stage: umount Jan 17 00:15:08.189575 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:15:08.189575 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:15:08.194458 ignition[987]: INFO : umount: umount passed Jan 17 00:15:08.194458 ignition[987]: INFO : Ignition finished successfully Jan 17 00:15:08.197936 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:15:08.198897 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:15:08.199049 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:15:08.201981 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:15:08.202148 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:15:08.205803 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:15:08.205892 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:15:08.222924 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:15:08.223019 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:15:08.241447 systemd[1]: Stopped target network.target - Network. Jan 17 00:15:08.243028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:15:08.243141 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:15:08.244982 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:15:08.246499 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:15:08.246588 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:15:08.263349 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:15:08.265151 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:15:08.266951 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:15:08.267041 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:15:08.268543 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:15:08.268614 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:15:08.270221 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:15:08.270314 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:15:08.272197 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:15:08.272434 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:15:08.274120 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:15:08.276046 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 17 00:15:08.278498 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 17 00:15:08.279343 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:15:08.279519 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:15:08.283541 systemd-networkd[750]: eth1: DHCPv6 lease lost Jan 17 00:15:08.285827 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:15:08.286226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:15:08.291114 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:15:08.291380 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:15:08.296270 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:15:08.296368 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:15:08.297450 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:15:08.297557 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:15:08.306716 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:15:08.307807 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:15:08.307944 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:15:08.312011 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:15:08.312125 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:15:08.314227 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:15:08.314346 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:15:08.316219 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:15:08.316374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:15:08.321679 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:15:08.346365 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:15:08.346742 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:15:08.348662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:15:08.348744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:15:08.350008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:15:08.350084 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:15:08.351915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:15:08.352007 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:15:08.354383 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:15:08.354500 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:15:08.356369 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:15:08.356496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:15:08.366753 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:15:08.367787 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:15:08.367911 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 00:15:08.370063 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:15:08.370176 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:15:08.372370 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:15:08.372512 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:15:08.373517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:08.373603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:08.376673 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:15:08.376846 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:15:08.386119 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:15:08.386336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:15:08.389343 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:15:08.400760 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:15:08.415813 systemd[1]: Switching root. Jan 17 00:15:08.480783 systemd-journald[186]: Journal stopped Jan 17 00:15:10.159902 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Jan 17 00:15:10.160030 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:15:10.160068 kernel: SELinux: policy capability open_perms=1 Jan 17 00:15:10.160099 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:15:10.160134 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:15:10.160158 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:15:10.160209 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:15:10.160230 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:15:10.160260 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:15:10.160283 kernel: audit: type=1403 audit(1768608908.686:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:15:10.160303 systemd[1]: Successfully loaded SELinux policy in 61.497ms. Jan 17 00:15:10.160324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.458ms. Jan 17 00:15:10.160340 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:15:10.160354 systemd[1]: Detected virtualization kvm. Jan 17 00:15:10.160372 systemd[1]: Detected architecture x86-64. Jan 17 00:15:10.160385 systemd[1]: Detected first boot. Jan 17 00:15:10.160404 systemd[1]: Hostname set to <ci-4081.3.6-n-912fd252f4>. Jan 17 00:15:10.163397 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:15:10.163505 zram_generator::config[1032]: No configuration found. Jan 17 00:15:10.163540 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:15:10.163565 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:15:10.163593 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:15:10.163627 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:15:10.163656 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:15:10.163683 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:15:10.163711 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:15:10.163738 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:15:10.163765 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:15:10.163792 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:15:10.163819 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:15:10.163846 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:15:10.163876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:15:10.163903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:15:10.163930 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:15:10.163956 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:15:10.163983 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:15:10.164009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:15:10.164035 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:15:10.164063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:15:10.164090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:15:10.164121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:15:10.164149 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:15:10.164177 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:15:10.164223 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:15:10.164257 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:15:10.164286 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:15:10.164319 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:15:10.164353 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:15:10.164384 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:15:10.164444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:15:10.164473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:15:10.164501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:15:10.164529 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:15:10.164557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:15:10.164584 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:15:10.164617 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:15:10.164645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:10.164673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 17 00:15:10.164700 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:15:10.164727 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:15:10.164757 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:15:10.164785 systemd[1]: Reached target machines.target - Containers. Jan 17 00:15:10.164812 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:15:10.164842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:10.166470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:15:10.166523 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:15:10.166552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:10.166580 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:15:10.166608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:10.166635 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:15:10.166662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:10.166691 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:15:10.166726 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:15:10.166755 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:15:10.166783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:15:10.166818 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:15:10.166842 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:15:10.166861 kernel: fuse: init (API version 7.39) Jan 17 00:15:10.166881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:15:10.166905 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:15:10.166933 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:15:10.166966 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:15:10.166994 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:15:10.167021 systemd[1]: Stopped verity-setup.service. Jan 17 00:15:10.167049 kernel: ACPI: bus type drm_connector registered Jan 17 00:15:10.167076 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:10.167103 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:15:10.167131 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:15:10.167159 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:15:10.167190 kernel: loop: module loaded Jan 17 00:15:10.167215 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:15:10.167243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:15:10.167277 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 17 00:15:10.167304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:15:10.167338 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:15:10.167366 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:15:10.169507 systemd-journald[1104]: Collecting audit messages is disabled. Jan 17 00:15:10.169568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:10.169586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:10.169608 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:15:10.169622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:15:10.169635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:10.169650 systemd-journald[1104]: Journal started Jan 17 00:15:10.169683 systemd-journald[1104]: Runtime Journal (/run/log/journal/752a11dfacaf4b9195fc78322248f968) is 4.9M, max 39.3M, 34.4M free. Jan 17 00:15:09.646522 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:15:09.673264 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:15:10.172503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:09.673923 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:15:10.176429 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:15:10.179484 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:15:10.179748 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:15:10.181119 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:10.181340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:10.182788 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:15:10.184470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:15:10.185914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:15:10.187251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:15:10.207589 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:15:10.219541 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:15:10.226632 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:15:10.229730 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:15:10.229798 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:15:10.233184 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:15:10.245073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:15:10.256795 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:15:10.259243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:10.264224 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:15:10.270757 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 17 00:15:10.271838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:10.277739 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:15:10.279088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:15:10.281663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:15:10.291662 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:15:10.296685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:15:10.301785 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:15:10.306864 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:15:10.308429 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:15:10.351657 systemd-journald[1104]: Time spent on flushing to /var/log/journal/752a11dfacaf4b9195fc78322248f968 is 66.073ms for 987 entries. Jan 17 00:15:10.351657 systemd-journald[1104]: System Journal (/var/log/journal/752a11dfacaf4b9195fc78322248f968) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:15:10.444899 systemd-journald[1104]: Received client request to flush runtime journal. Jan 17 00:15:10.444998 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:15:10.376749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:15:10.384710 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:15:10.399060 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:15:10.436876 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:15:10.447505 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:15:10.451571 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:15:10.453856 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:15:10.466683 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:15:10.489447 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:15:10.513638 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Jan 17 00:15:10.513669 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Jan 17 00:15:10.519670 kernel: loop1: detected capacity change from 0 to 8 Jan 17 00:15:10.522018 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:15:10.524962 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:15:10.538226 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:15:10.555654 kernel: loop2: detected capacity change from 0 to 224512 Jan 17 00:15:10.553595 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:15:10.643447 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:15:10.656187 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 17 00:15:10.665850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:15:10.715531 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:15:10.730572 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 17 00:15:10.733242 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 17 00:15:10.744451 kernel: loop5: detected capacity change from 0 to 8 Jan 17 00:15:10.753518 kernel: loop6: detected capacity change from 0 to 224512 Jan 17 00:15:10.757694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:15:10.779450 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 00:15:10.810790 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 00:15:10.812978 (sd-merge)[1178]: Merged extensions into '/usr'. Jan 17 00:15:10.827253 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:15:10.827521 systemd[1]: Reloading... Jan 17 00:15:11.046451 zram_generator::config[1205]: No configuration found. Jan 17 00:15:11.272629 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:15:11.330895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:11.418004 systemd[1]: Reloading finished in 587 ms. Jan 17 00:15:11.447049 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:15:11.449702 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:15:11.468793 systemd[1]: Starting ensure-sysext.service... Jan 17 00:15:11.473819 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:15:11.499667 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:15:11.499695 systemd[1]: Reloading... Jan 17 00:15:11.524844 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:15:11.529334 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:15:11.535461 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:15:11.537031 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 17 00:15:11.538782 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 17 00:15:11.546863 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:15:11.547672 systemd-tmpfiles[1249]: Skipping /boot Jan 17 00:15:11.563688 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:15:11.563895 systemd-tmpfiles[1249]: Skipping /boot Jan 17 00:15:11.688495 zram_generator::config[1276]: No configuration found. Jan 17 00:15:11.911834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:12.014445 systemd[1]: Reloading finished in 514 ms. 
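The (sd-merge) lines above are systemd-sysext overlaying the named extension images onto /usr. A small sketch of how the merged state can be inspected afterwards, assuming the usual sysext layout (images or symlinks under /etc/extensions, such as the kubernetes.raw link written earlier, and one extension-release stamp per merged image visible in the unified /usr tree):

from pathlib import Path

def show_sysext_state() -> None:
    # Images staged by Ignition, e.g. the kubernetes.raw symlink written above.
    for img in sorted(Path("/etc/extensions").glob("*.raw")):
        target = img.resolve() if img.is_symlink() else img
        print(f"{img.name} -> {target}")
    # Release stamps appear here once the overlay is mounted over /usr.
    for rel in sorted(Path("/usr/lib/extension-release.d").glob("extension-release.*")):
        print(rel.name)

if __name__ == "__main__":
    show_sysext_state()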
Jan 17 00:15:12.044459 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:15:12.050327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:15:12.065749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:12.070059 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:15:12.075726 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:15:12.090080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:15:12.096751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:15:12.108824 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:15:12.130930 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:15:12.136744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.137050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:12.146931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:12.152885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:12.163975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:12.165133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:12.165353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.172217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.173732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:12.174710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:12.174871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.184670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.185038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:12.194563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:15:12.196734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:12.197011 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.206531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:15:12.209494 systemd[1]: Finished ensure-sysext.service. Jan 17 00:15:12.220967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 17 00:15:12.224290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:12.225501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:12.245870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:12.256928 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:15:12.263900 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:15:12.264923 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:15:12.266055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:12.268528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:12.270483 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:12.270801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:12.273374 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:15:12.274900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:15:12.291541 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:15:12.297958 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:15:12.304151 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:15:12.330273 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 17 00:15:12.333869 augenrules[1363]: No rules Jan 17 00:15:12.339491 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:12.362305 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:15:12.376827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:15:12.388673 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:15:12.562861 systemd-resolved[1325]: Positive Trust Anchors: Jan 17 00:15:12.562878 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:15:12.562917 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:15:12.575269 systemd-resolved[1325]: Using system hostname 'ci-4081.3.6-n-912fd252f4'. Jan 17 00:15:12.578153 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:15:12.579725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:15:12.580756 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:15:12.581734 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 17 00:15:12.621503 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:15:12.634951 systemd-networkd[1375]: lo: Link UP Jan 17 00:15:12.635516 systemd-networkd[1375]: lo: Gained carrier Jan 17 00:15:12.639125 systemd-networkd[1375]: Enumeration completed Jan 17 00:15:12.640120 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-96:da:a5:e5:4e:72.network. Jan 17 00:15:12.641537 systemd-networkd[1375]: eth0: Link UP Jan 17 00:15:12.641550 systemd-networkd[1375]: eth0: Gained carrier Jan 17 00:15:12.646727 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 00:15:12.648561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.648822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:15:12.651441 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:12.674441 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1385) Jan 17 00:15:12.664477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:15:12.678503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:15:12.683678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:15:12.686211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:15:12.686278 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:15:12.686302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:15:12.686595 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:15:12.698940 systemd[1]: Reached target network.target - Network. Jan 17 00:15:12.721688 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:15:12.725906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:15:12.726425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:15:12.728381 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:15:12.728912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:15:12.732463 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 00:15:12.736395 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 00:15:12.742082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:15:12.742506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:15:12.751003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:15:12.752581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 00:15:12.783576 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:15:12.800623 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:15:12.811437 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 00:15:12.838773 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:15:12.853311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:15:12.853977 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-16:eb:a3:d9:b7:47.network. Jan 17 00:15:12.855141 systemd-networkd[1375]: eth1: Link UP Jan 17 00:15:12.855148 systemd-networkd[1375]: eth1: Gained carrier Jan 17 00:15:12.855592 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:12.861781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:15:12.863351 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:12.865589 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:12.901163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:15:12.949529 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 00:15:12.958192 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 00:15:12.962468 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:15:12.980562 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:15:12.990912 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:15:12.990994 kernel: [drm] features: -context_init Jan 17 00:15:12.994442 kernel: [drm] number of scanouts: 1 Jan 17 00:15:12.998503 kernel: [drm] number of cap sets: 0 Jan 17 00:15:12.999576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:13.015962 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 00:15:13.018696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:13.018961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:13.026047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:13.038438 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:15:13.042684 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:15:13.064453 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:15:13.078838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:15:13.080578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:13.115952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:15:13.178208 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:15:13.212494 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:15:13.219754 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:15:13.220713 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:15:13.239962 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 17 00:15:13.281364 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:15:13.283039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:15:13.283210 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:15:13.283480 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:15:13.283651 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:15:13.284064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:15:13.285014 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:15:13.286834 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:15:13.287793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:15:13.287851 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:15:13.288047 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:15:13.290509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:15:13.293173 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:15:13.301522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:15:13.303913 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:15:13.304980 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:15:13.308193 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:15:13.308837 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:15:13.309391 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:15:13.310096 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:15:13.317722 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:15:13.323339 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:15:13.329496 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:15:13.333951 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:15:13.346522 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:15:13.351127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:15:13.353028 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:15:13.365672 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:15:13.375341 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:15:13.383786 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:15:13.400739 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:15:13.409370 coreos-metadata[1438]: Jan 17 00:15:13.409 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:15:13.410265 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 17 00:15:13.411476 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:15:13.412278 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:15:13.420949 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:15:13.426888 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:15:13.431506 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:15:13.448684 coreos-metadata[1438]: Jan 17 00:15:13.447 INFO Fetch successful Jan 17 00:15:13.454390 jq[1440]: false Jan 17 00:15:13.451707 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:15:13.453563 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:15:13.473148 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:15:13.473516 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:15:13.506911 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:15:13.510070 dbus-daemon[1439]: [system] SELinux support is enabled Jan 17 00:15:13.507216 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:15:13.510759 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:15:13.518208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:15:13.518272 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:15:13.521812 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:15:13.521946 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 00:15:13.521978 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
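coreos-metadata reports a successful fetch of the DigitalOcean metadata document above. The same link-local endpoint can be queried by hand; this is only a sketch, assumes curl is installed, and copies the URL verbatim from the log:

    curl -s http://169.254.169.254/metadata/v1.json | head -c 400; echo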
Jan 17 00:15:13.530908 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:15:13.537455 jq[1456]: true Jan 17 00:15:13.545081 tar[1458]: linux-amd64/LICENSE Jan 17 00:15:13.545081 tar[1458]: linux-amd64/helm Jan 17 00:15:13.557017 extend-filesystems[1441]: Found loop4 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found loop5 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found loop6 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found loop7 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda1 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda2 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda3 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found usr Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda4 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda6 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda7 Jan 17 00:15:13.557017 extend-filesystems[1441]: Found vda9 Jan 17 00:15:13.557017 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 17 00:15:13.618898 update_engine[1452]: I20260117 00:15:13.602528 1452 main.cc:92] Flatcar Update Engine starting Jan 17 00:15:13.636185 jq[1474]: true Jan 17 00:15:13.631068 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:15:13.641750 update_engine[1452]: I20260117 00:15:13.635075 1452 update_check_scheduler.cc:74] Next update check in 6m35s Jan 17 00:15:13.639746 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:15:13.678464 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 17 00:15:13.682319 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:15:13.706606 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 00:15:13.688986 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:15:13.690029 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:15:13.772371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1383) Jan 17 00:15:13.861703 systemd-logind[1450]: New seat seat0. Jan 17 00:15:13.888466 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:15:13.888501 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:15:13.888850 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:15:13.918283 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 00:15:13.943690 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:15:13.993559 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:15:13.993559 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 00:15:13.993559 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 00:15:14.017581 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 17 00:15:14.017581 extend-filesystems[1441]: Found vdb Jan 17 00:15:13.995279 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:15:14.024282 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:15:13.995614 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
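extend-filesystems grows /dev/vda9 online above, from 553472 to 15121403 4k blocks, while it is mounted at /. Done by hand the equivalent is roughly the following sketch (device name taken from the log; resize2fs with no size argument grows a mounted ext4 filesystem to fill its partition, and dumpe2fs needs root):

    resize2fs /dev/vda9                              # on-line ext4 grow to the partition size
    df -h /                                          # confirm the new capacity
    dumpe2fs -h /dev/vda9 | grep -i 'block count'    # block count should match the log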
Jan 17 00:15:14.041315 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:15:14.045334 systemd-networkd[1375]: eth1: Gained IPv6LL Jan 17 00:15:14.048629 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:14.055865 systemd[1]: Starting sshkeys.service... Jan 17 00:15:14.058302 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:15:14.063892 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:15:14.074992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:14.078962 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:15:14.134761 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:15:14.147284 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:15:14.164945 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:15:14.238101 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:15:14.281268 coreos-metadata[1519]: Jan 17 00:15:14.280 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:15:14.300776 coreos-metadata[1519]: Jan 17 00:15:14.299 INFO Fetch successful Jan 17 00:15:14.333286 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:15:14.338493 unknown[1519]: wrote ssh authorized keys file for user: core Jan 17 00:15:14.352115 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:15:14.420612 containerd[1465]: time="2026-01-17T00:15:14.419231827Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:15:14.423744 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:15:14.431499 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:15:14.423964 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:15:14.434655 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:15:14.449031 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:15:14.451569 systemd[1]: Finished sshkeys.service. Jan 17 00:15:14.491940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:15:14.502802 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:15:14.518043 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:15:14.521008 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:15:14.546774 containerd[1465]: time="2026-01-17T00:15:14.546680869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.549335 containerd[1465]: time="2026-01-17T00:15:14.549252610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:14.549335 containerd[1465]: time="2026-01-17T00:15:14.549324466Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 17 00:15:14.549680 containerd[1465]: time="2026-01-17T00:15:14.549354937Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:15:14.549680 containerd[1465]: time="2026-01-17T00:15:14.549616463Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:15:14.549680 containerd[1465]: time="2026-01-17T00:15:14.549650411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.549820 containerd[1465]: time="2026-01-17T00:15:14.549743770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:14.549820 containerd[1465]: time="2026-01-17T00:15:14.549766784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.550045777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.550081932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.550104752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.550125472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.550538104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.550877058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.551077326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.551102254Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.551237780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:15:14.552376 containerd[1465]: time="2026-01-17T00:15:14.551313521Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:15:14.571452 containerd[1465]: time="2026-01-17T00:15:14.568341445Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:15:14.571452 containerd[1465]: time="2026-01-17T00:15:14.568493022Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 17 00:15:14.571452 containerd[1465]: time="2026-01-17T00:15:14.568525091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:15:14.571452 containerd[1465]: time="2026-01-17T00:15:14.568880713Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:15:14.571452 containerd[1465]: time="2026-01-17T00:15:14.568955549Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:15:14.571452 containerd[1465]: time="2026-01-17T00:15:14.570585129Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:15:14.572020 containerd[1465]: time="2026-01-17T00:15:14.571799779Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:15:14.572107 containerd[1465]: time="2026-01-17T00:15:14.572072773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:15:14.572343 containerd[1465]: time="2026-01-17T00:15:14.572113625Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:15:14.572343 containerd[1465]: time="2026-01-17T00:15:14.572147725Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:15:14.572343 containerd[1465]: time="2026-01-17T00:15:14.572180667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572343 containerd[1465]: time="2026-01-17T00:15:14.572245846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572343 containerd[1465]: time="2026-01-17T00:15:14.572269822Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572343 containerd[1465]: time="2026-01-17T00:15:14.572301291Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572346080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572375539Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572402999Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572450311Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572490662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572521331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572550061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 17 00:15:14.572606 containerd[1465]: time="2026-01-17T00:15:14.572576484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572622436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572657260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572684806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572713733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572744473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572776095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572802281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572829309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572858187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572891063Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:15:14.572955 containerd[1465]: time="2026-01-17T00:15:14.572935994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.572962531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.572987863Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573076771Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573118413Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573145069Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573172229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573195721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573219105Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573243489Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:15:14.574775 containerd[1465]: time="2026-01-17T00:15:14.573267425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:15:14.579031 containerd[1465]: time="2026-01-17T00:15:14.578145641Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:15:14.579031 containerd[1465]: time="2026-01-17T00:15:14.578331559Z" level=info msg="Connect containerd service" Jan 17 00:15:14.579031 containerd[1465]: time="2026-01-17T00:15:14.578449018Z" level=info msg="using legacy CRI server" Jan 17 00:15:14.579031 containerd[1465]: time="2026-01-17T00:15:14.578465839Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:15:14.579031 containerd[1465]: 
time="2026-01-17T00:15:14.578626679Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:15:14.582539 containerd[1465]: time="2026-01-17T00:15:14.582223953Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.582895533Z" level=info msg="Start subscribing containerd event" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.583067674Z" level=info msg="Start recovering state" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.583201934Z" level=info msg="Start event monitor" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.583221724Z" level=info msg="Start snapshots syncer" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.583240690Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.583254870Z" level=info msg="Start streaming server" Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.584076576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:15:14.584946 containerd[1465]: time="2026-01-17T00:15:14.584168691Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:15:14.584387 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:15:14.586747 containerd[1465]: time="2026-01-17T00:15:14.586549392Z" level=info msg="containerd successfully booted in 0.169985s" Jan 17 00:15:14.611037 systemd-networkd[1375]: eth0: Gained IPv6LL Jan 17 00:15:14.612541 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:15.033624 tar[1458]: linux-amd64/README.md Jan 17 00:15:15.065596 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:15:15.392254 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:15:15.402756 systemd[1]: Started sshd@0-64.227.98.118:22-4.153.228.146:49436.service - OpenSSH per-connection server daemon (4.153.228.146:49436). Jan 17 00:15:15.887513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:15.889197 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:15:15.894748 systemd[1]: Startup finished in 1.625s (kernel) + 8.909s (initrd) + 7.265s (userspace) = 17.801s. Jan 17 00:15:15.902548 sshd[1556]: Accepted publickey for core from 4.153.228.146 port 49436 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:15.906741 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:15.909352 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:15.943552 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:15:15.956081 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:15:15.973512 systemd-logind[1450]: New session 1 of user core. Jan 17 00:15:15.987467 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:15:15.999186 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 17 00:15:16.015507 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:15:16.230305 systemd[1570]: Queued start job for default target default.target. Jan 17 00:15:16.236205 systemd[1570]: Created slice app.slice - User Application Slice. Jan 17 00:15:16.236257 systemd[1570]: Reached target paths.target - Paths. Jan 17 00:15:16.236284 systemd[1570]: Reached target timers.target - Timers. Jan 17 00:15:16.238438 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:15:16.269111 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:15:16.269397 systemd[1570]: Reached target sockets.target - Sockets. Jan 17 00:15:16.269582 systemd[1570]: Reached target basic.target - Basic System. Jan 17 00:15:16.269670 systemd[1570]: Reached target default.target - Main User Target. Jan 17 00:15:16.269709 systemd[1570]: Startup finished in 242ms. Jan 17 00:15:16.269857 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:15:16.277163 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:15:16.624099 systemd[1]: Started sshd@1-64.227.98.118:22-4.153.228.146:49448.service - OpenSSH per-connection server daemon (4.153.228.146:49448). Jan 17 00:15:16.816514 kubelet[1563]: E0117 00:15:16.816438 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:16.819826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:16.820167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:16.820878 systemd[1]: kubelet.service: Consumed 1.513s CPU time. Jan 17 00:15:17.054883 sshd[1585]: Accepted publickey for core from 4.153.228.146 port 49448 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:17.057690 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:17.067273 systemd-logind[1450]: New session 2 of user core. Jan 17 00:15:17.076756 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:15:17.361189 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:17.366358 systemd[1]: sshd@1-64.227.98.118:22-4.153.228.146:49448.service: Deactivated successfully. Jan 17 00:15:17.368816 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:15:17.370866 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:15:17.372750 systemd-logind[1450]: Removed session 2. Jan 17 00:15:17.460910 systemd[1]: Started sshd@2-64.227.98.118:22-4.153.228.146:49464.service - OpenSSH per-connection server daemon (4.153.228.146:49464). Jan 17 00:15:17.925033 sshd[1594]: Accepted publickey for core from 4.153.228.146 port 49464 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:17.927241 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:17.934115 systemd-logind[1450]: New session 3 of user core. Jan 17 00:15:17.941270 systemd[1]: Started session-3.scope - Session 3 of User core. 
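The kubelet exit above (status 1) is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join, so repeated failures before that point are expected rather than fatal. A sketch for confirming the state, assuming a kubeadm workflow, which the log itself does not state:

    ls -l /var/lib/kubelet/config.yaml      # absent until kubeadm writes it
    systemctl status kubelet --no-pager     # shows the restart counter and last exit
    journalctl -u kubelet -n 20 --no-pager  # the same "failed to load Kubelet config file" error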
Jan 17 00:15:18.259793 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:18.266209 systemd[1]: sshd@2-64.227.98.118:22-4.153.228.146:49464.service: Deactivated successfully. Jan 17 00:15:18.268328 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:15:18.269307 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:15:18.270914 systemd-logind[1450]: Removed session 3. Jan 17 00:15:18.350929 systemd[1]: Started sshd@3-64.227.98.118:22-4.153.228.146:49478.service - OpenSSH per-connection server daemon (4.153.228.146:49478). Jan 17 00:15:18.806014 sshd[1601]: Accepted publickey for core from 4.153.228.146 port 49478 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:18.808093 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:18.814794 systemd-logind[1450]: New session 4 of user core. Jan 17 00:15:18.822817 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:15:19.137638 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:19.141319 systemd[1]: sshd@3-64.227.98.118:22-4.153.228.146:49478.service: Deactivated successfully. Jan 17 00:15:19.143578 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:15:19.146108 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:15:19.147657 systemd-logind[1450]: Removed session 4. Jan 17 00:15:19.225896 systemd[1]: Started sshd@4-64.227.98.118:22-4.153.228.146:49482.service - OpenSSH per-connection server daemon (4.153.228.146:49482). Jan 17 00:15:19.677828 sshd[1608]: Accepted publickey for core from 4.153.228.146 port 49482 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:19.678853 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:19.687617 systemd-logind[1450]: New session 5 of user core. Jan 17 00:15:19.696779 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:15:19.949199 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:15:19.949603 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:19.966021 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:20.038006 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:20.043829 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:15:20.045040 systemd[1]: sshd@4-64.227.98.118:22-4.153.228.146:49482.service: Deactivated successfully. Jan 17 00:15:20.048114 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:15:20.049857 systemd-logind[1450]: Removed session 5. Jan 17 00:15:20.113808 systemd[1]: Started sshd@5-64.227.98.118:22-4.153.228.146:49490.service - OpenSSH per-connection server daemon (4.153.228.146:49490). Jan 17 00:15:20.504753 sshd[1616]: Accepted publickey for core from 4.153.228.146 port 49490 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:20.507032 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:20.516455 systemd-logind[1450]: New session 6 of user core. Jan 17 00:15:20.522700 systemd[1]: Started session-6.scope - Session 6 of User core. 
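The sudo entry above runs /usr/sbin/setenforce 1 as the core user, switching SELinux to enforcing mode for the running kernel. A quick sketch for verifying the result, assuming the standard SELinux userspace tools are present:

    getenforce    # prints Enforcing, Permissive or Disabled
    sestatus      # fuller report including the loaded policy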
Jan 17 00:15:20.735963 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:15:20.736453 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:20.742861 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:20.752523 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:15:20.752982 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:20.774919 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:20.778906 auditctl[1623]: No rules Jan 17 00:15:20.780115 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:15:20.780368 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:20.792060 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:15:20.830135 augenrules[1641]: No rules Jan 17 00:15:20.831555 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:15:20.833683 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 17 00:15:20.895113 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 17 00:15:20.901014 systemd[1]: sshd@5-64.227.98.118:22-4.153.228.146:49490.service: Deactivated successfully. Jan 17 00:15:20.903148 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:15:20.904154 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:15:20.905393 systemd-logind[1450]: Removed session 6. Jan 17 00:15:20.992019 systemd[1]: Started sshd@6-64.227.98.118:22-4.153.228.146:49492.service - OpenSSH per-connection server daemon (4.153.228.146:49492). Jan 17 00:15:21.428352 sshd[1649]: Accepted publickey for core from 4.153.228.146 port 49492 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:15:21.430493 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:15:21.437819 systemd-logind[1450]: New session 7 of user core. Jan 17 00:15:21.445804 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:15:21.683268 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:15:21.683942 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:15:22.150892 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:15:22.151357 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:15:22.706183 dockerd[1668]: time="2026-01-17T00:15:22.706095195Z" level=info msg="Starting up" Jan 17 00:15:22.895916 systemd[1]: var-lib-docker-metacopy\x2dcheck458745796-merged.mount: Deactivated successfully. Jan 17 00:15:22.922292 dockerd[1668]: time="2026-01-17T00:15:22.922223274Z" level=info msg="Loading containers: start." Jan 17 00:15:23.104450 kernel: Initializing XFRM netlink socket Jan 17 00:15:23.146277 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jan 17 00:15:23.208334 systemd-timesyncd[1351]: Contacted time server 23.159.16.194:123 (2.flatcar.pool.ntp.org). Jan 17 00:15:23.209001 systemd-timesyncd[1351]: Initial clock synchronization to Sat 2026-01-17 00:15:23.213023 UTC. 
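Above, two sudo invocations delete the default audit rule files and restart audit-rules, after which auditctl and augenrules both report "No rules". A sketch for checking the loaded rule set afterwards (auditctl comes from the audit package; the unit name is the one in the log):

    auditctl -l                              # "No rules" when the kernel rule list is empty
    systemctl status audit-rules --no-pager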
Jan 17 00:15:23.232332 systemd-networkd[1375]: docker0: Link UP Jan 17 00:15:23.268448 dockerd[1668]: time="2026-01-17T00:15:23.268158050Z" level=info msg="Loading containers: done." Jan 17 00:15:23.299626 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck708025183-merged.mount: Deactivated successfully. Jan 17 00:15:23.304388 dockerd[1668]: time="2026-01-17T00:15:23.303511699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:15:23.304388 dockerd[1668]: time="2026-01-17T00:15:23.303687727Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:15:23.304388 dockerd[1668]: time="2026-01-17T00:15:23.303936353Z" level=info msg="Daemon has completed initialization" Jan 17 00:15:23.371472 dockerd[1668]: time="2026-01-17T00:15:23.371060048Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:15:23.371344 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:15:24.394750 containerd[1465]: time="2026-01-17T00:15:24.394258231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:15:25.257048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834808575.mount: Deactivated successfully. Jan 17 00:15:26.890195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:15:26.896871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:27.148833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:27.162423 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:27.239634 kubelet[1880]: E0117 00:15:27.239158 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:27.243761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:27.243965 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
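dockerd finishes initialization above and listens on /run/docker.sock, and containerd starts pulling registry.k8s.io/kube-apiserver:v1.32.11. A hedged sketch for poking at both from a shell; the image name is copied from the log, and the k8s.io namespace is an assumption based on the CRI plugin's default:

    docker -H unix:///run/docker.sock info | head       # daemon answers on the new socket
    ctr --namespace k8s.io images ls | grep kube-apiserver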
Jan 17 00:15:27.456275 containerd[1465]: time="2026-01-17T00:15:27.455754500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:27.458881 containerd[1465]: time="2026-01-17T00:15:27.458817438Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:15:27.462437 containerd[1465]: time="2026-01-17T00:15:27.460974291Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:27.467885 containerd[1465]: time="2026-01-17T00:15:27.467814159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:27.470280 containerd[1465]: time="2026-01-17T00:15:27.470216550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.075893208s" Jan 17 00:15:27.470489 containerd[1465]: time="2026-01-17T00:15:27.470463815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:15:27.471307 containerd[1465]: time="2026-01-17T00:15:27.471276898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:15:29.914121 containerd[1465]: time="2026-01-17T00:15:29.914041430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:29.919892 containerd[1465]: time="2026-01-17T00:15:29.919549022Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:15:29.924140 containerd[1465]: time="2026-01-17T00:15:29.922852456Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:29.931309 containerd[1465]: time="2026-01-17T00:15:29.930308039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:29.932053 containerd[1465]: time="2026-01-17T00:15:29.932011075Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.460697614s" Jan 17 00:15:29.932053 containerd[1465]: time="2026-01-17T00:15:29.932052662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 
00:15:29.933652 containerd[1465]: time="2026-01-17T00:15:29.933602591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:15:29.936955 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 17 00:15:31.907245 containerd[1465]: time="2026-01-17T00:15:31.907174145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.909968 containerd[1465]: time="2026-01-17T00:15:31.909364724Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:15:31.912225 containerd[1465]: time="2026-01-17T00:15:31.912146630Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.918633 containerd[1465]: time="2026-01-17T00:15:31.918530537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:31.920866 containerd[1465]: time="2026-01-17T00:15:31.920650156Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.986987752s" Jan 17 00:15:31.920866 containerd[1465]: time="2026-01-17T00:15:31.920719281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:15:31.922493 containerd[1465]: time="2026-01-17T00:15:31.922447716Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:15:33.043124 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 00:15:33.429466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227926887.mount: Deactivated successfully. 
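systemd-resolved drops to a degraded feature set (plain UDP instead of UDP+EDNS0) for the upstream DNS servers 67.207.67.2 and 67.207.67.3 above, which usually means EDNS0-sized replies were not getting through. A sketch for inspecting the per-link DNS state with resolvectl, which is part of systemd:

    resolvectl status                 # current DNS servers and negotiated feature level per link
    resolvectl query registry.k8s.io  # exercise the resolver path the image pulls depend on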
Jan 17 00:15:34.098474 containerd[1465]: time="2026-01-17T00:15:34.098053579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:34.101389 containerd[1465]: time="2026-01-17T00:15:34.101237684Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:15:34.104206 containerd[1465]: time="2026-01-17T00:15:34.104101448Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:34.109294 containerd[1465]: time="2026-01-17T00:15:34.109228611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:34.110853 containerd[1465]: time="2026-01-17T00:15:34.110479041Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.187806736s" Jan 17 00:15:34.110853 containerd[1465]: time="2026-01-17T00:15:34.110524395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:15:34.111326 containerd[1465]: time="2026-01-17T00:15:34.111182390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:15:34.764448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911414602.mount: Deactivated successfully. 
Jan 17 00:15:36.055007 containerd[1465]: time="2026-01-17T00:15:36.054934251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:36.057607 containerd[1465]: time="2026-01-17T00:15:36.057524445Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:15:36.060360 containerd[1465]: time="2026-01-17T00:15:36.060263879Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:36.067698 containerd[1465]: time="2026-01-17T00:15:36.067599352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:36.070644 containerd[1465]: time="2026-01-17T00:15:36.070371022Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.959148298s" Jan 17 00:15:36.070644 containerd[1465]: time="2026-01-17T00:15:36.070462404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:15:36.072003 containerd[1465]: time="2026-01-17T00:15:36.071647692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:15:36.120232 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 17 00:15:36.856518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895710821.mount: Deactivated successfully. 
Jan 17 00:15:36.878728 containerd[1465]: time="2026-01-17T00:15:36.878627896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:36.881996 containerd[1465]: time="2026-01-17T00:15:36.881891566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:15:36.885659 containerd[1465]: time="2026-01-17T00:15:36.885539794Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:36.895038 containerd[1465]: time="2026-01-17T00:15:36.894937092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:36.896553 containerd[1465]: time="2026-01-17T00:15:36.896285980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 824.585633ms" Jan 17 00:15:36.896553 containerd[1465]: time="2026-01-17T00:15:36.896344051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:15:36.897306 containerd[1465]: time="2026-01-17T00:15:36.897018973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:15:37.390482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:15:37.410722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:37.610730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:37.612839 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:15:37.692206 kubelet[1967]: E0117 00:15:37.691385 1967 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:15:37.696987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:15:37.697312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:15:37.805489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104162619.mount: Deactivated successfully. 
Jan 17 00:15:40.099462 containerd[1465]: time="2026-01-17T00:15:40.099068742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.101979 containerd[1465]: time="2026-01-17T00:15:40.101901441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:15:40.105108 containerd[1465]: time="2026-01-17T00:15:40.105021295Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.113192 containerd[1465]: time="2026-01-17T00:15:40.112101365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:15:40.114553 containerd[1465]: time="2026-01-17T00:15:40.114370779Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.21729794s" Jan 17 00:15:40.114553 containerd[1465]: time="2026-01-17T00:15:40.114530082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:15:43.509166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:43.519079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:43.593353 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit session-7.scope)... Jan 17 00:15:43.593619 systemd[1]: Reloading... Jan 17 00:15:43.782449 zram_generator::config[2098]: No configuration found. Jan 17 00:15:43.979919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:44.143842 systemd[1]: Reloading finished in 549 ms. Jan 17 00:15:44.218682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:44.232310 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:44.233577 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:15:44.233902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:44.237346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:44.437718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:44.439056 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:15:44.515168 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:44.515168 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
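The reload above triggers a warning that docker.socket still references the legacy /var/run/docker.sock path (systemd rewrites it to /run/docker.sock), and the restarted kubelet warns that flags such as --container-runtime-endpoint and --pod-infra-container-image are deprecated and should move into its config file. A sketch for seeing where those values currently come from; systemctl cat prints each unit together with any drop-ins:

    systemctl cat docker.socket | grep ListenStream   # the /var/run path the warning refers to
    systemctl cat kubelet.service                     # the drop-in(s) passing the deprecated flags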
Jan 17 00:15:44.515168 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:44.515168 kubelet[2151]: I0117 00:15:44.513560 2151 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:15:45.174127 kubelet[2151]: I0117 00:15:45.174055 2151 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:15:45.175436 kubelet[2151]: I0117 00:15:45.174334 2151 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:15:45.175436 kubelet[2151]: I0117 00:15:45.174744 2151 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:15:45.219885 kubelet[2151]: E0117 00:15:45.219832 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.98.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:45.226241 kubelet[2151]: I0117 00:15:45.226192 2151 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:45.247785 kubelet[2151]: E0117 00:15:45.247730 2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:15:45.248002 kubelet[2151]: I0117 00:15:45.247984 2151 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:15:45.252537 kubelet[2151]: I0117 00:15:45.252488 2151 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:15:45.253070 kubelet[2151]: I0117 00:15:45.253027 2151 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:15:45.253463 kubelet[2151]: I0117 00:15:45.253172 2151 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-912fd252f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:15:45.254706 kubelet[2151]: I0117 00:15:45.254666 2151 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:15:45.254909 kubelet[2151]: I0117 00:15:45.254895 2151 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:15:45.256941 kubelet[2151]: I0117 00:15:45.256906 2151 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:45.263194 kubelet[2151]: I0117 00:15:45.263143 2151 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:15:45.263642 kubelet[2151]: I0117 00:15:45.263511 2151 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:15:45.263642 kubelet[2151]: I0117 00:15:45.263547 2151 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:15:45.263642 kubelet[2151]: I0117 00:15:45.263562 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:15:45.272499 kubelet[2151]: W0117 00:15:45.271839 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.98.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-912fd252f4&limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:45.272499 kubelet[2151]: E0117 00:15:45.271943 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.98.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-912fd252f4&limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:45.273440 
kubelet[2151]: W0117 00:15:45.272880 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.98.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:45.273440 kubelet[2151]: E0117 00:15:45.272950 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.98.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:45.273440 kubelet[2151]: I0117 00:15:45.273065 2151 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:15:45.278128 kubelet[2151]: I0117 00:15:45.278084 2151 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:15:45.280114 kubelet[2151]: W0117 00:15:45.278387 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:15:45.280114 kubelet[2151]: I0117 00:15:45.279825 2151 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:15:45.280114 kubelet[2151]: I0117 00:15:45.279876 2151 server.go:1287] "Started kubelet" Jan 17 00:15:45.289896 kubelet[2151]: I0117 00:15:45.289859 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:15:45.293974 kubelet[2151]: E0117 00:15:45.289522 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.98.118:6443/api/v1/namespaces/default/events\": dial tcp 64.227.98.118:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-912fd252f4.188b5c7719fbb3e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-912fd252f4,UID:ci-4081.3.6-n-912fd252f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-912fd252f4,},FirstTimestamp:2026-01-17 00:15:45.279841248 +0000 UTC m=+0.833827733,LastTimestamp:2026-01-17 00:15:45.279841248 +0000 UTC m=+0.833827733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-912fd252f4,}" Jan 17 00:15:45.297969 kubelet[2151]: I0117 00:15:45.296363 2151 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:15:45.298148 kubelet[2151]: I0117 00:15:45.298012 2151 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:15:45.299308 kubelet[2151]: I0117 00:15:45.299217 2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:15:45.299657 kubelet[2151]: I0117 00:15:45.299624 2151 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:15:45.301119 kubelet[2151]: I0117 00:15:45.299827 2151 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:15:45.301119 kubelet[2151]: E0117 00:15:45.300129 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-912fd252f4\" not found" Jan 17 00:15:45.301119 kubelet[2151]: I0117 00:15:45.300555 2151 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:15:45.301119 kubelet[2151]: I0117 00:15:45.300624 2151 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:15:45.301119 kubelet[2151]: I0117 00:15:45.301089 2151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:15:45.304401 kubelet[2151]: W0117 00:15:45.304331 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.98.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:45.304401 kubelet[2151]: E0117 00:15:45.304398 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.98.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:45.304589 kubelet[2151]: E0117 00:15:45.304487 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.98.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-912fd252f4?timeout=10s\": dial tcp 64.227.98.118:6443: connect: connection refused" interval="200ms" Jan 17 00:15:45.305187 kubelet[2151]: E0117 00:15:45.305155 2151 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:15:45.305322 kubelet[2151]: I0117 00:15:45.305309 2151 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:15:45.305425 kubelet[2151]: I0117 00:15:45.305394 2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:15:45.309621 kubelet[2151]: I0117 00:15:45.309585 2151 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:15:45.345282 kubelet[2151]: I0117 00:15:45.345243 2151 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:15:45.345686 kubelet[2151]: I0117 00:15:45.345667 2151 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:15:45.345790 kubelet[2151]: I0117 00:15:45.345781 2151 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:45.346083 kubelet[2151]: I0117 00:15:45.346026 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:15:45.348228 kubelet[2151]: I0117 00:15:45.348171 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:15:45.348228 kubelet[2151]: I0117 00:15:45.348209 2151 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:15:45.348767 kubelet[2151]: I0117 00:15:45.348244 2151 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
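Every "dial tcp 64.227.98.118:6443: connect: connection refused" in this stretch comes from kubelet clients (certificate bootstrap, the informer factory, the lease controller, event posting) trying to reach a kube-apiserver whose static pod has not been started yet; the errors simply repeat until the socket comes up. Below is a minimal, stdlib-only sketch of that probe-until-up pattern. The address is taken from the log; the starting interval and cap are illustrative, not the kubelet's actual retry schedule:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials addr until a TCP connection succeeds, doubling the
// delay between attempts up to maxDelay. This mirrors what the log shows:
// repeated "connection refused" until the apiserver pod is running.
func waitForAPIServer(addr string, maxDelay time.Duration) {
	delay := 200 * time.Millisecond // illustrative starting interval
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable at", addr)
			return
		}
		fmt.Printf("dial %s: %v (retrying in %s)\n", addr, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	// Address from the log; 6443 is the usual kube-apiserver secure port.
	waitForAPIServer("64.227.98.118:6443", 5*time.Second)
}
```

The same doubling is visible in the lease controller's own retry interval later in the log (200ms, then 400ms, 800ms and finally 1.6s).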
Jan 17 00:15:45.348767 kubelet[2151]: I0117 00:15:45.348257 2151 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:15:45.348767 kubelet[2151]: E0117 00:15:45.348335 2151 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:15:45.354882 kubelet[2151]: I0117 00:15:45.354455 2151 policy_none.go:49] "None policy: Start" Jan 17 00:15:45.354882 kubelet[2151]: I0117 00:15:45.354509 2151 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:15:45.354882 kubelet[2151]: I0117 00:15:45.354531 2151 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:15:45.360615 kubelet[2151]: W0117 00:15:45.360553 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.98.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:45.361000 kubelet[2151]: E0117 00:15:45.360954 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.98.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:45.367441 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:15:45.383484 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:15:45.389744 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:15:45.402177 kubelet[2151]: I0117 00:15:45.402112 2151 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:15:45.402451 kubelet[2151]: E0117 00:15:45.402428 2151 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-912fd252f4\" not found" Jan 17 00:15:45.402708 kubelet[2151]: I0117 00:15:45.402493 2151 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:15:45.402708 kubelet[2151]: I0117 00:15:45.402583 2151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:15:45.406417 kubelet[2151]: E0117 00:15:45.406256 2151 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:15:45.406417 kubelet[2151]: E0117 00:15:45.406325 2151 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-912fd252f4\" not found" Jan 17 00:15:45.407076 kubelet[2151]: I0117 00:15:45.406942 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:15:45.462133 systemd[1]: Created slice kubepods-burstable-pod526f8ac8bd51f015603a009e47b36863.slice - libcontainer container kubepods-burstable-pod526f8ac8bd51f015603a009e47b36863.slice. 
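systemd has just created the QoS parent slices (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) and the first per-pod slice, kubepods-burstable-pod526f8ac8bd51f015603a009e47b36863.slice. With the systemd cgroup driver ("CgroupDriver":"systemd" in the node config above), per-pod cgroups are named from the QoS class plus the pod UID. A rough sketch of that naming follows; the dash-to-underscore escaping is an assumption from memory (the UIDs in this log contain no dashes, so it is not observable here):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds a systemd slice name shaped like the ones in the log:
// kubepods[-<qos>]-pod<uid>.slice. As I recall, Guaranteed-QoS pods sit
// directly under kubepods.slice, so qos is empty for them.
func podSliceName(qos, podUID string) string {
	// Assumption: dashes act as hierarchy separators in slice names, so any
	// dashes in the UID would be escaped to underscores.
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// UID taken from the kube-apiserver static pod in the log.
	fmt.Println(podSliceName("burstable", "526f8ac8bd51f015603a009e47b36863"))
	// Output: kubepods-burstable-pod526f8ac8bd51f015603a009e47b36863.slice
}
```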
Jan 17 00:15:45.481095 kubelet[2151]: E0117 00:15:45.481004 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.490050 systemd[1]: Created slice kubepods-burstable-pod1fce75466750330e2fb70af1100aad78.slice - libcontainer container kubepods-burstable-pod1fce75466750330e2fb70af1100aad78.slice. Jan 17 00:15:45.493499 kubelet[2151]: E0117 00:15:45.493462 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.496154 systemd[1]: Created slice kubepods-burstable-podad72706345ffb42411acd4678b557860.slice - libcontainer container kubepods-burstable-podad72706345ffb42411acd4678b557860.slice. Jan 17 00:15:45.498286 kubelet[2151]: E0117 00:15:45.498250 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.502389 kubelet[2151]: I0117 00:15:45.502266 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.502389 kubelet[2151]: I0117 00:15:45.502393 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.502389 kubelet[2151]: I0117 00:15:45.502449 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/526f8ac8bd51f015603a009e47b36863-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" (UID: \"526f8ac8bd51f015603a009e47b36863\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.502389 kubelet[2151]: I0117 00:15:45.502506 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.502389 kubelet[2151]: I0117 00:15:45.502566 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.503757 kubelet[2151]: I0117 00:15:45.502597 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.503757 kubelet[2151]: I0117 00:15:45.502659 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad72706345ffb42411acd4678b557860-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-912fd252f4\" (UID: \"ad72706345ffb42411acd4678b557860\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.503757 kubelet[2151]: I0117 00:15:45.503205 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/526f8ac8bd51f015603a009e47b36863-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" (UID: \"526f8ac8bd51f015603a009e47b36863\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.503757 kubelet[2151]: I0117 00:15:45.503240 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/526f8ac8bd51f015603a009e47b36863-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" (UID: \"526f8ac8bd51f015603a009e47b36863\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.505482 kubelet[2151]: E0117 00:15:45.505135 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.98.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-912fd252f4?timeout=10s\": dial tcp 64.227.98.118:6443: connect: connection refused" interval="400ms" Jan 17 00:15:45.505482 kubelet[2151]: I0117 00:15:45.505281 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.505764 kubelet[2151]: E0117 00:15:45.505721 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.98.118:6443/api/v1/nodes\": dial tcp 64.227.98.118:6443: connect: connection refused" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.707278 kubelet[2151]: I0117 00:15:45.707212 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.708053 kubelet[2151]: E0117 00:15:45.708004 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.98.118:6443/api/v1/nodes\": dial tcp 64.227.98.118:6443: connect: connection refused" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:45.782953 kubelet[2151]: E0117 00:15:45.782768 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:45.787054 containerd[1465]: time="2026-01-17T00:15:45.786997986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-912fd252f4,Uid:526f8ac8bd51f015603a009e47b36863,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:45.794661 kubelet[2151]: E0117 00:15:45.794324 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:45.799495 kubelet[2151]: E0117 00:15:45.799464 2151 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:45.801741 containerd[1465]: time="2026-01-17T00:15:45.801341081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-912fd252f4,Uid:1fce75466750330e2fb70af1100aad78,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:45.801741 containerd[1465]: time="2026-01-17T00:15:45.801432409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-912fd252f4,Uid:ad72706345ffb42411acd4678b557860,Namespace:kube-system,Attempt:0,}" Jan 17 00:15:45.906347 kubelet[2151]: E0117 00:15:45.906279 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.98.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-912fd252f4?timeout=10s\": dial tcp 64.227.98.118:6443: connect: connection refused" interval="800ms" Jan 17 00:15:46.109289 kubelet[2151]: I0117 00:15:46.109162 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:46.109982 kubelet[2151]: E0117 00:15:46.109583 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.98.118:6443/api/v1/nodes\": dial tcp 64.227.98.118:6443: connect: connection refused" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:46.194748 kubelet[2151]: W0117 00:15:46.194662 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.98.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:46.195006 kubelet[2151]: E0117 00:15:46.194718 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.98.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:46.304620 kubelet[2151]: W0117 00:15:46.304516 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.98.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-912fd252f4&limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:46.304805 kubelet[2151]: E0117 00:15:46.304626 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.98.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-912fd252f4&limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:46.415833 kubelet[2151]: W0117 00:15:46.415663 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.98.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:46.415833 kubelet[2151]: E0117 00:15:46.415768 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.98.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:46.510085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163060998.mount: Deactivated successfully. Jan 17 00:15:46.533502 containerd[1465]: time="2026-01-17T00:15:46.532812664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:46.538660 containerd[1465]: time="2026-01-17T00:15:46.538400740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:15:46.541767 containerd[1465]: time="2026-01-17T00:15:46.541651294Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:46.545088 containerd[1465]: time="2026-01-17T00:15:46.544882758Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:46.547377 containerd[1465]: time="2026-01-17T00:15:46.547306496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:46.549680 containerd[1465]: time="2026-01-17T00:15:46.549602193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:15:46.551630 containerd[1465]: time="2026-01-17T00:15:46.551510800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:15:46.557681 containerd[1465]: time="2026-01-17T00:15:46.557584031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:15:46.559138 containerd[1465]: time="2026-01-17T00:15:46.559073991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 757.556774ms" Jan 17 00:15:46.565479 containerd[1465]: time="2026-01-17T00:15:46.564604872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 763.119406ms" Jan 17 00:15:46.565948 containerd[1465]: time="2026-01-17T00:15:46.565901440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 778.802616ms" Jan 17 00:15:46.707314 kubelet[2151]: E0117 00:15:46.707136 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://64.227.98.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-912fd252f4?timeout=10s\": dial tcp 64.227.98.118:6443: connect: connection refused" interval="1.6s" Jan 17 00:15:46.787981 containerd[1465]: time="2026-01-17T00:15:46.787621711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:46.787981 containerd[1465]: time="2026-01-17T00:15:46.787709447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:46.787981 containerd[1465]: time="2026-01-17T00:15:46.787739721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:46.787981 containerd[1465]: time="2026-01-17T00:15:46.787902798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:46.798443 kubelet[2151]: W0117 00:15:46.798007 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.98.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.98.118:6443: connect: connection refused Jan 17 00:15:46.798443 kubelet[2151]: E0117 00:15:46.798087 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.98.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:46.799283 containerd[1465]: time="2026-01-17T00:15:46.797459389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:46.799283 containerd[1465]: time="2026-01-17T00:15:46.797526115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:46.799283 containerd[1465]: time="2026-01-17T00:15:46.797541902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:46.799283 containerd[1465]: time="2026-01-17T00:15:46.798247851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:46.807313 containerd[1465]: time="2026-01-17T00:15:46.806660379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:15:46.808397 containerd[1465]: time="2026-01-17T00:15:46.807638833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:15:46.808397 containerd[1465]: time="2026-01-17T00:15:46.807663363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:46.808397 containerd[1465]: time="2026-01-17T00:15:46.808033241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:15:46.840976 systemd[1]: Started cri-containerd-6bc0169e835167e2767b20b07460c87478d9ac49387558f38cf947838e85b56b.scope - libcontainer container 6bc0169e835167e2767b20b07460c87478d9ac49387558f38cf947838e85b56b. Jan 17 00:15:46.844011 systemd[1]: Started cri-containerd-afee548116836a1a490959cac540215f27d372c856322f388638ed02a5aa84eb.scope - libcontainer container afee548116836a1a490959cac540215f27d372c856322f388638ed02a5aa84eb. Jan 17 00:15:46.850810 systemd[1]: Started cri-containerd-c2a3aa10c9aac8f7eeee9a8a69547fab48f9716d9b70ed423db15c2094df5547.scope - libcontainer container c2a3aa10c9aac8f7eeee9a8a69547fab48f9716d9b70ed423db15c2094df5547. Jan 17 00:15:46.915229 kubelet[2151]: I0117 00:15:46.911323 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:46.915229 kubelet[2151]: E0117 00:15:46.911773 2151 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.98.118:6443/api/v1/nodes\": dial tcp 64.227.98.118:6443: connect: connection refused" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:46.925448 containerd[1465]: time="2026-01-17T00:15:46.923800401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-912fd252f4,Uid:1fce75466750330e2fb70af1100aad78,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2a3aa10c9aac8f7eeee9a8a69547fab48f9716d9b70ed423db15c2094df5547\"" Jan 17 00:15:46.933459 kubelet[2151]: E0117 00:15:46.931330 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:46.938467 containerd[1465]: time="2026-01-17T00:15:46.938288123Z" level=info msg="CreateContainer within sandbox \"c2a3aa10c9aac8f7eeee9a8a69547fab48f9716d9b70ed423db15c2094df5547\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:15:46.967521 containerd[1465]: time="2026-01-17T00:15:46.966585559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-912fd252f4,Uid:526f8ac8bd51f015603a009e47b36863,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bc0169e835167e2767b20b07460c87478d9ac49387558f38cf947838e85b56b\"" Jan 17 00:15:46.969313 kubelet[2151]: E0117 00:15:46.969281 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:46.972311 containerd[1465]: time="2026-01-17T00:15:46.972259824Z" level=info msg="CreateContainer within sandbox \"6bc0169e835167e2767b20b07460c87478d9ac49387558f38cf947838e85b56b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:15:46.981705 containerd[1465]: time="2026-01-17T00:15:46.981622432Z" level=info msg="CreateContainer within sandbox \"c2a3aa10c9aac8f7eeee9a8a69547fab48f9716d9b70ed423db15c2094df5547\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0da6b8fb9b53bf9dba0396073cc5d3dd2abd15892d2b07cf10d72be88809624b\"" Jan 17 00:15:46.983111 containerd[1465]: time="2026-01-17T00:15:46.983054951Z" level=info msg="StartContainer for \"0da6b8fb9b53bf9dba0396073cc5d3dd2abd15892d2b07cf10d72be88809624b\"" Jan 17 00:15:46.983883 containerd[1465]: time="2026-01-17T00:15:46.983799692Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-912fd252f4,Uid:ad72706345ffb42411acd4678b557860,Namespace:kube-system,Attempt:0,} returns sandbox id \"afee548116836a1a490959cac540215f27d372c856322f388638ed02a5aa84eb\"" Jan 17 00:15:46.985547 kubelet[2151]: E0117 00:15:46.984932 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:46.987726 containerd[1465]: time="2026-01-17T00:15:46.987667326Z" level=info msg="CreateContainer within sandbox \"afee548116836a1a490959cac540215f27d372c856322f388638ed02a5aa84eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:15:47.030155 containerd[1465]: time="2026-01-17T00:15:47.030009466Z" level=info msg="CreateContainer within sandbox \"6bc0169e835167e2767b20b07460c87478d9ac49387558f38cf947838e85b56b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3de9e95d329e049636fb4b41f5180a1cbbc566ef0de1b9297225e67e63836029\"" Jan 17 00:15:47.030725 containerd[1465]: time="2026-01-17T00:15:47.030690614Z" level=info msg="StartContainer for \"3de9e95d329e049636fb4b41f5180a1cbbc566ef0de1b9297225e67e63836029\"" Jan 17 00:15:47.032363 systemd[1]: Started cri-containerd-0da6b8fb9b53bf9dba0396073cc5d3dd2abd15892d2b07cf10d72be88809624b.scope - libcontainer container 0da6b8fb9b53bf9dba0396073cc5d3dd2abd15892d2b07cf10d72be88809624b. Jan 17 00:15:47.043434 containerd[1465]: time="2026-01-17T00:15:47.043128198Z" level=info msg="CreateContainer within sandbox \"afee548116836a1a490959cac540215f27d372c856322f388638ed02a5aa84eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1dc03d653ea8d04a3debcf1c9021a5e205ef624fdaad7a2e7928dcbd11b1b68c\"" Jan 17 00:15:47.046596 containerd[1465]: time="2026-01-17T00:15:47.046223940Z" level=info msg="StartContainer for \"1dc03d653ea8d04a3debcf1c9021a5e205ef624fdaad7a2e7928dcbd11b1b68c\"" Jan 17 00:15:47.099914 systemd[1]: Started cri-containerd-1dc03d653ea8d04a3debcf1c9021a5e205ef624fdaad7a2e7928dcbd11b1b68c.scope - libcontainer container 1dc03d653ea8d04a3debcf1c9021a5e205ef624fdaad7a2e7928dcbd11b1b68c. Jan 17 00:15:47.116634 systemd[1]: Started cri-containerd-3de9e95d329e049636fb4b41f5180a1cbbc566ef0de1b9297225e67e63836029.scope - libcontainer container 3de9e95d329e049636fb4b41f5180a1cbbc566ef0de1b9297225e67e63836029. 
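The containerd entries above trace the CRI sequence for each control-plane static pod: RunPodSandbox returns a sandbox ID (the pause container pulled earlier), CreateContainer is issued within that sandbox and returns a container ID, and StartContainer then runs it (the "returns successfully" lines follow just below), with systemd wrapping each in a cri-containerd-<id>.scope. The sketch below is only a schematic, stdlib-only model of that ordering, not the real CRI client (which lives in k8s.io/cri-api); the helper names and the random IDs are invented for illustration:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newID mimics the 64-hex-character IDs visible in the log
// (e.g. c2a3aa10c9aa... for the controller-manager sandbox).
func newID() string {
	b := make([]byte, 32)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// Stand-ins for the CRI RuntimeService RPCs whose results the log reports;
// each just hands back a fresh fake ID.
func runPodSandbox(podName string) string            { return newID() }
func createContainer(sandboxID, name string) string  { return newID() }
func startContainer(containerID string)              { /* the runtime would exec the OCI container here */ }

func main() {
	for _, pod := range []string{
		"kube-apiserver-ci-4081.3.6-n-912fd252f4",
		"kube-controller-manager-ci-4081.3.6-n-912fd252f4",
		"kube-scheduler-ci-4081.3.6-n-912fd252f4",
	} {
		sandbox := runPodSandbox(pod)        // "RunPodSandbox ... returns sandbox id"
		ctr := createContainer(sandbox, pod) // "CreateContainer within sandbox ... returns container id"
		startContainer(ctr)                  // "StartContainer for ... returns successfully"
		fmt.Printf("%s: sandbox=%.12s container=%.12s\n", pod, sandbox, ctr)
	}
}
```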
Jan 17 00:15:47.146801 containerd[1465]: time="2026-01-17T00:15:47.146740280Z" level=info msg="StartContainer for \"0da6b8fb9b53bf9dba0396073cc5d3dd2abd15892d2b07cf10d72be88809624b\" returns successfully" Jan 17 00:15:47.216515 containerd[1465]: time="2026-01-17T00:15:47.216382782Z" level=info msg="StartContainer for \"3de9e95d329e049636fb4b41f5180a1cbbc566ef0de1b9297225e67e63836029\" returns successfully" Jan 17 00:15:47.231518 containerd[1465]: time="2026-01-17T00:15:47.229754918Z" level=info msg="StartContainer for \"1dc03d653ea8d04a3debcf1c9021a5e205ef624fdaad7a2e7928dcbd11b1b68c\" returns successfully" Jan 17 00:15:47.257705 kubelet[2151]: E0117 00:15:47.257654 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.98.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.98.118:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:15:47.370573 kubelet[2151]: E0117 00:15:47.370198 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:47.370573 kubelet[2151]: E0117 00:15:47.370432 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:47.375532 kubelet[2151]: E0117 00:15:47.375074 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:47.375532 kubelet[2151]: E0117 00:15:47.375265 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:47.379439 kubelet[2151]: E0117 00:15:47.378628 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:47.379655 kubelet[2151]: E0117 00:15:47.379632 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:48.382059 kubelet[2151]: E0117 00:15:48.382007 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:48.383193 kubelet[2151]: E0117 00:15:48.382190 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:48.383193 kubelet[2151]: E0117 00:15:48.382581 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:48.383193 kubelet[2151]: E0117 00:15:48.382726 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:48.514998 
kubelet[2151]: I0117 00:15:48.513948 2151 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:50.416790 kubelet[2151]: E0117 00:15:50.416748 2151 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:50.417374 kubelet[2151]: E0117 00:15:50.416916 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:50.972391 kubelet[2151]: E0117 00:15:50.971899 2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-912fd252f4\" not found" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:50.998447 kubelet[2151]: I0117 00:15:50.998330 2151 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.002013 kubelet[2151]: I0117 00:15:51.001531 2151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.041615 kubelet[2151]: E0117 00:15:51.041230 2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-912fd252f4.188b5c7719fbb3e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-912fd252f4,UID:ci-4081.3.6-n-912fd252f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-912fd252f4,},FirstTimestamp:2026-01-17 00:15:45.279841248 +0000 UTC m=+0.833827733,LastTimestamp:2026-01-17 00:15:45.279841248 +0000 UTC m=+0.833827733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-912fd252f4,}" Jan 17 00:15:51.062430 kubelet[2151]: E0117 00:15:51.062367 2151 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.062430 kubelet[2151]: I0117 00:15:51.062429 2151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.067943 kubelet[2151]: E0117 00:15:51.067880 2151 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.067943 kubelet[2151]: I0117 00:15:51.067920 2151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.070527 kubelet[2151]: E0117 00:15:51.070441 2151 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-912fd252f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:51.277641 kubelet[2151]: I0117 00:15:51.276489 2151 apiserver.go:52] "Watching apiserver" Jan 17 00:15:51.300865 kubelet[2151]: I0117 00:15:51.300770 2151 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Jan 17 00:15:53.320116 kubelet[2151]: I0117 00:15:53.320039 2151 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:53.334932 kubelet[2151]: W0117 00:15:53.334847 2151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:53.335506 kubelet[2151]: E0117 00:15:53.335471 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:53.390912 kubelet[2151]: E0117 00:15:53.390855 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:53.429601 systemd[1]: Reloading requested from client PID 2425 ('systemctl') (unit session-7.scope)... Jan 17 00:15:53.429628 systemd[1]: Reloading... Jan 17 00:15:53.590449 zram_generator::config[2467]: No configuration found. Jan 17 00:15:53.786397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:15:53.958483 systemd[1]: Reloading finished in 527 ms. Jan 17 00:15:54.019542 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:54.036166 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:15:54.036549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:54.037100 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 127.0M memory peak, 0B memory swap peak. Jan 17 00:15:54.042950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:15:54.243272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:15:54.259174 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:15:54.358632 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:15:54.358632 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:15:54.358632 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
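The recurring dns.go "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameserver entries than the kubelet applies (three), so the extras are omitted and the applied line collapses to "67.207.67.3 67.207.67.2 67.207.67.3" (note the repeated first entry). A minimal sketch of that check, stdlib only, using the conventional /etc/resolv.conf path and a much simpler parser than the kubelet's:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the classic resolv.conf limit the kubelet warns about

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		// The situation behind the log's warning: extra entries are ignored,
		// so only the first three are applied.
		fmt.Printf("nameserver limit exceeded: applying %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return
	}
	fmt.Printf("nameservers within limit: %v\n", servers)
}
```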
Jan 17 00:15:54.358632 kubelet[2515]: I0117 00:15:54.358256 2515 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:15:54.371251 kubelet[2515]: I0117 00:15:54.371169 2515 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:15:54.371251 kubelet[2515]: I0117 00:15:54.371225 2515 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:15:54.373624 kubelet[2515]: I0117 00:15:54.371929 2515 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:15:54.377613 kubelet[2515]: I0117 00:15:54.377562 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:15:54.381286 kubelet[2515]: I0117 00:15:54.381230 2515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:15:54.386284 kubelet[2515]: E0117 00:15:54.386239 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:15:54.386488 kubelet[2515]: I0117 00:15:54.386465 2515 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:15:54.398743 kubelet[2515]: I0117 00:15:54.398694 2515 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:15:54.399090 kubelet[2515]: I0117 00:15:54.398965 2515 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:15:54.399299 kubelet[2515]: I0117 00:15:54.399020 2515 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-912fd252f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:15:54.399496 kubelet[2515]: I0117 00:15:54.399316 2515 
topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:15:54.399496 kubelet[2515]: I0117 00:15:54.399335 2515 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:15:54.399496 kubelet[2515]: I0117 00:15:54.399434 2515 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:54.401443 kubelet[2515]: I0117 00:15:54.400734 2515 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:15:54.401443 kubelet[2515]: I0117 00:15:54.400770 2515 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:15:54.401443 kubelet[2515]: I0117 00:15:54.400797 2515 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:15:54.401443 kubelet[2515]: I0117 00:15:54.400811 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:15:54.405377 kubelet[2515]: I0117 00:15:54.405325 2515 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:15:54.406451 kubelet[2515]: I0117 00:15:54.406121 2515 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:15:54.407890 kubelet[2515]: I0117 00:15:54.407866 2515 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:15:54.408053 kubelet[2515]: I0117 00:15:54.408032 2515 server.go:1287] "Started kubelet" Jan 17 00:15:54.412199 kubelet[2515]: I0117 00:15:54.412119 2515 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:15:54.417476 kubelet[2515]: I0117 00:15:54.416023 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:15:54.417476 kubelet[2515]: I0117 00:15:54.416515 2515 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:15:54.417476 kubelet[2515]: I0117 00:15:54.417373 2515 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:15:54.430156 kubelet[2515]: I0117 00:15:54.429901 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:15:54.445769 kubelet[2515]: I0117 00:15:54.445099 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:15:54.452023 kubelet[2515]: I0117 00:15:54.451599 2515 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:15:54.452938 kubelet[2515]: E0117 00:15:54.452594 2515 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-912fd252f4\" not found" Jan 17 00:15:54.462945 kubelet[2515]: I0117 00:15:54.462915 2515 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:15:54.464548 kubelet[2515]: I0117 00:15:54.463511 2515 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:15:54.468362 kubelet[2515]: E0117 00:15:54.468222 2515 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:15:54.470798 kubelet[2515]: I0117 00:15:54.469853 2515 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:15:54.470798 kubelet[2515]: I0117 00:15:54.469883 2515 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:15:54.470798 kubelet[2515]: I0117 00:15:54.470012 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:15:54.472996 kubelet[2515]: I0117 00:15:54.472814 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:15:54.478724 kubelet[2515]: I0117 00:15:54.476768 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:15:54.478724 kubelet[2515]: I0117 00:15:54.478200 2515 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:15:54.478724 kubelet[2515]: I0117 00:15:54.478246 2515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:15:54.478724 kubelet[2515]: I0117 00:15:54.478257 2515 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:15:54.478724 kubelet[2515]: E0117 00:15:54.478333 2515 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:15:54.562630 kubelet[2515]: I0117 00:15:54.562583 2515 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:15:54.562630 kubelet[2515]: I0117 00:15:54.562620 2515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:15:54.562863 kubelet[2515]: I0117 00:15:54.562654 2515 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:15:54.563016 kubelet[2515]: I0117 00:15:54.562978 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:15:54.564132 kubelet[2515]: I0117 00:15:54.563026 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:15:54.564132 kubelet[2515]: I0117 00:15:54.563085 2515 policy_none.go:49] "None policy: Start" Jan 17 00:15:54.564132 kubelet[2515]: I0117 00:15:54.563105 2515 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:15:54.564132 kubelet[2515]: I0117 00:15:54.563126 2515 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:15:54.564132 kubelet[2515]: I0117 00:15:54.563340 2515 state_mem.go:75] "Updated machine memory state" Jan 17 00:15:54.572705 kubelet[2515]: I0117 00:15:54.571134 2515 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:15:54.572705 kubelet[2515]: I0117 00:15:54.571348 2515 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:15:54.572705 kubelet[2515]: I0117 00:15:54.571361 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:15:54.572705 kubelet[2515]: I0117 00:15:54.571948 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:15:54.577660 kubelet[2515]: E0117 00:15:54.577122 2515 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:15:54.585154 kubelet[2515]: I0117 00:15:54.584779 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.589005 kubelet[2515]: I0117 00:15:54.588687 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.589005 kubelet[2515]: I0117 00:15:54.588961 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.602147 kubelet[2515]: W0117 00:15:54.601394 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:54.608491 kubelet[2515]: W0117 00:15:54.607226 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:54.610480 kubelet[2515]: W0117 00:15:54.609230 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:54.610480 kubelet[2515]: E0117 00:15:54.609318 2515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.666959 kubelet[2515]: I0117 00:15:54.666858 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/526f8ac8bd51f015603a009e47b36863-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" (UID: \"526f8ac8bd51f015603a009e47b36863\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.667713 kubelet[2515]: I0117 00:15:54.667391 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.667713 kubelet[2515]: I0117 00:15:54.667469 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.667713 kubelet[2515]: I0117 00:15:54.667503 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.667713 kubelet[2515]: I0117 00:15:54.667531 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad72706345ffb42411acd4678b557860-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.6-n-912fd252f4\" (UID: \"ad72706345ffb42411acd4678b557860\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.667713 kubelet[2515]: I0117 00:15:54.667558 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/526f8ac8bd51f015603a009e47b36863-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" (UID: \"526f8ac8bd51f015603a009e47b36863\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.668066 kubelet[2515]: I0117 00:15:54.667583 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/526f8ac8bd51f015603a009e47b36863-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" (UID: \"526f8ac8bd51f015603a009e47b36863\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.668066 kubelet[2515]: I0117 00:15:54.667612 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.668066 kubelet[2515]: I0117 00:15:54.667640 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fce75466750330e2fb70af1100aad78-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-912fd252f4\" (UID: \"1fce75466750330e2fb70af1100aad78\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.688346 kubelet[2515]: I0117 00:15:54.687937 2515 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.702282 kubelet[2515]: I0117 00:15:54.702209 2515 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.702523 kubelet[2515]: I0117 00:15:54.702335 2515 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-912fd252f4" Jan 17 00:15:54.902637 kubelet[2515]: E0117 00:15:54.902580 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:54.907960 kubelet[2515]: E0117 00:15:54.907814 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:54.911341 kubelet[2515]: E0117 00:15:54.910556 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:55.420326 kubelet[2515]: I0117 00:15:55.419136 2515 apiserver.go:52] "Watching apiserver" Jan 17 00:15:55.465594 kubelet[2515]: I0117 00:15:55.464530 2515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:15:55.528540 kubelet[2515]: E0117 00:15:55.527015 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:55.528540 kubelet[2515]: I0117 00:15:55.527757 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:55.530696 kubelet[2515]: I0117 00:15:55.530155 2515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:55.546816 kubelet[2515]: W0117 00:15:55.546784 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:55.548559 kubelet[2515]: E0117 00:15:55.547175 2515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-912fd252f4\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:55.548559 kubelet[2515]: E0117 00:15:55.547455 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:55.551831 kubelet[2515]: W0117 00:15:55.551148 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:15:55.551831 kubelet[2515]: E0117 00:15:55.551224 2515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-912fd252f4\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" Jan 17 00:15:55.551831 kubelet[2515]: E0117 00:15:55.551425 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:55.604157 kubelet[2515]: I0117 00:15:55.603852 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-912fd252f4" podStartSLOduration=2.603824372 podStartE2EDuration="2.603824372s" podCreationTimestamp="2026-01-17 00:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:55.586362107 +0000 UTC m=+1.317319473" watchObservedRunningTime="2026-01-17 00:15:55.603824372 +0000 UTC m=+1.334781734" Jan 17 00:15:55.604157 kubelet[2515]: I0117 00:15:55.604076 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-912fd252f4" podStartSLOduration=1.6040583659999998 podStartE2EDuration="1.604058366s" podCreationTimestamp="2026-01-17 00:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:55.603964354 +0000 UTC m=+1.334921718" watchObservedRunningTime="2026-01-17 00:15:55.604058366 +0000 UTC m=+1.335015729" Jan 17 00:15:56.529958 kubelet[2515]: E0117 00:15:56.529923 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:56.530675 kubelet[2515]: E0117 00:15:56.530174 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:56.530675 kubelet[2515]: E0117 00:15:56.530368 2515 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:57.533350 kubelet[2515]: E0117 00:15:57.533303 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:57.853464 kubelet[2515]: E0117 00:15:57.853232 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:57.876828 kubelet[2515]: I0117 00:15:57.876732 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-912fd252f4" podStartSLOduration=3.876659244 podStartE2EDuration="3.876659244s" podCreationTimestamp="2026-01-17 00:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:15:55.625113962 +0000 UTC m=+1.356071327" watchObservedRunningTime="2026-01-17 00:15:57.876659244 +0000 UTC m=+3.607616608" Jan 17 00:15:58.535766 kubelet[2515]: E0117 00:15:58.535724 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:59.099277 update_engine[1452]: I20260117 00:15:59.099030 1452 update_attempter.cc:509] Updating boot flags... Jan 17 00:15:59.165910 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2571) Jan 17 00:15:59.248279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2572) Jan 17 00:15:59.537628 kubelet[2515]: E0117 00:15:59.537585 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:15:59.871680 kubelet[2515]: I0117 00:15:59.871557 2515 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:15:59.873362 containerd[1465]: time="2026-01-17T00:15:59.873297560Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:15:59.874145 kubelet[2515]: I0117 00:15:59.873870 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:16:00.848554 systemd[1]: Created slice kubepods-besteffort-pod259f2e8a_e088_4b94_b345_243ddb78f8f4.slice - libcontainer container kubepods-besteffort-pod259f2e8a_e088_4b94_b345_243ddb78f8f4.slice. 
Jan 17 00:16:00.916913 kubelet[2515]: I0117 00:16:00.916846 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/259f2e8a-e088-4b94-b345-243ddb78f8f4-xtables-lock\") pod \"kube-proxy-vr2ss\" (UID: \"259f2e8a-e088-4b94-b345-243ddb78f8f4\") " pod="kube-system/kube-proxy-vr2ss" Jan 17 00:16:00.916913 kubelet[2515]: I0117 00:16:00.916903 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/259f2e8a-e088-4b94-b345-243ddb78f8f4-kube-proxy\") pod \"kube-proxy-vr2ss\" (UID: \"259f2e8a-e088-4b94-b345-243ddb78f8f4\") " pod="kube-system/kube-proxy-vr2ss" Jan 17 00:16:00.916913 kubelet[2515]: I0117 00:16:00.916932 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/259f2e8a-e088-4b94-b345-243ddb78f8f4-lib-modules\") pod \"kube-proxy-vr2ss\" (UID: \"259f2e8a-e088-4b94-b345-243ddb78f8f4\") " pod="kube-system/kube-proxy-vr2ss" Jan 17 00:16:00.916913 kubelet[2515]: I0117 00:16:00.916959 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftwd7\" (UniqueName: \"kubernetes.io/projected/259f2e8a-e088-4b94-b345-243ddb78f8f4-kube-api-access-ftwd7\") pod \"kube-proxy-vr2ss\" (UID: \"259f2e8a-e088-4b94-b345-243ddb78f8f4\") " pod="kube-system/kube-proxy-vr2ss" Jan 17 00:16:01.009283 kubelet[2515]: W0117 00:16:01.009229 2515 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-912fd252f4" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object Jan 17 00:16:01.009481 kubelet[2515]: E0117 00:16:01.009293 2515 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" logger="UnhandledError" Jan 17 00:16:01.009481 kubelet[2515]: I0117 00:16:01.009373 2515 status_manager.go:890] "Failed to get status for pod" podUID="b2696b99-a326-4b3d-b45d-804160110e70" pod="tigera-operator/tigera-operator-7dcd859c48-5pnr5" err="pods \"tigera-operator-7dcd859c48-5pnr5\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" Jan 17 00:16:01.010009 kubelet[2515]: W0117 00:16:01.009966 2515 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.6-n-912fd252f4" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object Jan 17 00:16:01.010115 kubelet[2515]: E0117 00:16:01.010015 2515 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" logger="UnhandledError" Jan 17 00:16:01.017011 systemd[1]: Created slice kubepods-besteffort-podb2696b99_a326_4b3d_b45d_804160110e70.slice - libcontainer container kubepods-besteffort-podb2696b99_a326_4b3d_b45d_804160110e70.slice. Jan 17 00:16:01.019865 kubelet[2515]: I0117 00:16:01.017798 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2696b99-a326-4b3d-b45d-804160110e70-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5pnr5\" (UID: \"b2696b99-a326-4b3d-b45d-804160110e70\") " pod="tigera-operator/tigera-operator-7dcd859c48-5pnr5" Jan 17 00:16:01.019865 kubelet[2515]: I0117 00:16:01.017908 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjrdb\" (UniqueName: \"kubernetes.io/projected/b2696b99-a326-4b3d-b45d-804160110e70-kube-api-access-vjrdb\") pod \"tigera-operator-7dcd859c48-5pnr5\" (UID: \"b2696b99-a326-4b3d-b45d-804160110e70\") " pod="tigera-operator/tigera-operator-7dcd859c48-5pnr5" Jan 17 00:16:01.165993 kubelet[2515]: E0117 00:16:01.165938 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:01.167594 containerd[1465]: time="2026-01-17T00:16:01.166884902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vr2ss,Uid:259f2e8a-e088-4b94-b345-243ddb78f8f4,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:01.221274 containerd[1465]: time="2026-01-17T00:16:01.219377746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:01.221274 containerd[1465]: time="2026-01-17T00:16:01.219506715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:01.221274 containerd[1465]: time="2026-01-17T00:16:01.219532046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:01.221274 containerd[1465]: time="2026-01-17T00:16:01.219670948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:01.259771 systemd[1]: Started cri-containerd-01eccfa5d4dbb60237fc5e91a567b05d79b2c9a25567fef951cb14d696eca0e5.scope - libcontainer container 01eccfa5d4dbb60237fc5e91a567b05d79b2c9a25567fef951cb14d696eca0e5. 
Jan 17 00:16:01.297002 containerd[1465]: time="2026-01-17T00:16:01.296886729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vr2ss,Uid:259f2e8a-e088-4b94-b345-243ddb78f8f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"01eccfa5d4dbb60237fc5e91a567b05d79b2c9a25567fef951cb14d696eca0e5\"" Jan 17 00:16:01.298791 kubelet[2515]: E0117 00:16:01.298756 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:01.303848 containerd[1465]: time="2026-01-17T00:16:01.303522000Z" level=info msg="CreateContainer within sandbox \"01eccfa5d4dbb60237fc5e91a567b05d79b2c9a25567fef951cb14d696eca0e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:16:01.354939 containerd[1465]: time="2026-01-17T00:16:01.354742280Z" level=info msg="CreateContainer within sandbox \"01eccfa5d4dbb60237fc5e91a567b05d79b2c9a25567fef951cb14d696eca0e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"acd35a5c8b123344be22d11f7816da7004c8374d8dc4ce6a01d84e6123c7c31b\"" Jan 17 00:16:01.356789 containerd[1465]: time="2026-01-17T00:16:01.355844587Z" level=info msg="StartContainer for \"acd35a5c8b123344be22d11f7816da7004c8374d8dc4ce6a01d84e6123c7c31b\"" Jan 17 00:16:01.401228 systemd[1]: Started cri-containerd-acd35a5c8b123344be22d11f7816da7004c8374d8dc4ce6a01d84e6123c7c31b.scope - libcontainer container acd35a5c8b123344be22d11f7816da7004c8374d8dc4ce6a01d84e6123c7c31b. Jan 17 00:16:01.457581 containerd[1465]: time="2026-01-17T00:16:01.457100983Z" level=info msg="StartContainer for \"acd35a5c8b123344be22d11f7816da7004c8374d8dc4ce6a01d84e6123c7c31b\" returns successfully" Jan 17 00:16:01.544965 kubelet[2515]: E0117 00:16:01.544772 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:01.567483 kubelet[2515]: I0117 00:16:01.565024 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vr2ss" podStartSLOduration=1.564998085 podStartE2EDuration="1.564998085s" podCreationTimestamp="2026-01-17 00:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:01.564871643 +0000 UTC m=+7.295829006" watchObservedRunningTime="2026-01-17 00:16:01.564998085 +0000 UTC m=+7.295955449" Jan 17 00:16:02.044747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790974762.mount: Deactivated successfully. Jan 17 00:16:02.128612 kubelet[2515]: E0117 00:16:02.128529 2515 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:02.128612 kubelet[2515]: E0117 00:16:02.128620 2515 projected.go:194] Error preparing data for projected volume kube-api-access-vjrdb for pod tigera-operator/tigera-operator-7dcd859c48-5pnr5: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:02.129472 kubelet[2515]: E0117 00:16:02.128752 2515 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2696b99-a326-4b3d-b45d-804160110e70-kube-api-access-vjrdb podName:b2696b99-a326-4b3d-b45d-804160110e70 nodeName:}" failed. 
No retries permitted until 2026-01-17 00:16:02.628718246 +0000 UTC m=+8.359675608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vjrdb" (UniqueName: "kubernetes.io/projected/b2696b99-a326-4b3d-b45d-804160110e70-kube-api-access-vjrdb") pod "tigera-operator-7dcd859c48-5pnr5" (UID: "b2696b99-a326-4b3d-b45d-804160110e70") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:02.826698 containerd[1465]: time="2026-01-17T00:16:02.826633984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5pnr5,Uid:b2696b99-a326-4b3d-b45d-804160110e70,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:16:02.901320 containerd[1465]: time="2026-01-17T00:16:02.900827217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:02.901320 containerd[1465]: time="2026-01-17T00:16:02.900947364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:02.901320 containerd[1465]: time="2026-01-17T00:16:02.900973419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:02.901320 containerd[1465]: time="2026-01-17T00:16:02.901158944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:02.954856 systemd[1]: Started cri-containerd-839a945d450def8bc7c761bd9a4d711ab4e6a97ca65d597188bb6fef2c5126f3.scope - libcontainer container 839a945d450def8bc7c761bd9a4d711ab4e6a97ca65d597188bb6fef2c5126f3. Jan 17 00:16:03.027394 containerd[1465]: time="2026-01-17T00:16:03.027214763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5pnr5,Uid:b2696b99-a326-4b3d-b45d-804160110e70,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"839a945d450def8bc7c761bd9a4d711ab4e6a97ca65d597188bb6fef2c5126f3\"" Jan 17 00:16:03.031078 containerd[1465]: time="2026-01-17T00:16:03.030688535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:16:03.039914 systemd[1]: run-containerd-runc-k8s.io-839a945d450def8bc7c761bd9a4d711ab4e6a97ca65d597188bb6fef2c5126f3-runc.veAJIQ.mount: Deactivated successfully. Jan 17 00:16:04.899951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661849232.mount: Deactivated successfully. 
Jan 17 00:16:05.814364 kubelet[2515]: E0117 00:16:05.814214 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:06.603079 kubelet[2515]: E0117 00:16:06.602901 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:06.675650 containerd[1465]: time="2026-01-17T00:16:06.675561471Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:06.684283 containerd[1465]: time="2026-01-17T00:16:06.684168870Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:16:06.687811 containerd[1465]: time="2026-01-17T00:16:06.687709877Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:06.694365 containerd[1465]: time="2026-01-17T00:16:06.693868874Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:06.695553 containerd[1465]: time="2026-01-17T00:16:06.695471371Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.664725401s" Jan 17 00:16:06.695820 containerd[1465]: time="2026-01-17T00:16:06.695795456Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:16:06.701292 containerd[1465]: time="2026-01-17T00:16:06.701223291Z" level=info msg="CreateContainer within sandbox \"839a945d450def8bc7c761bd9a4d711ab4e6a97ca65d597188bb6fef2c5126f3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:16:06.728188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179757685.mount: Deactivated successfully. Jan 17 00:16:06.738473 containerd[1465]: time="2026-01-17T00:16:06.738240750Z" level=info msg="CreateContainer within sandbox \"839a945d450def8bc7c761bd9a4d711ab4e6a97ca65d597188bb6fef2c5126f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2c2b0b70767f7ea50890cc54019446d9160ece83307ba332126da6a13619f494\"" Jan 17 00:16:06.739267 containerd[1465]: time="2026-01-17T00:16:06.739121831Z" level=info msg="StartContainer for \"2c2b0b70767f7ea50890cc54019446d9160ece83307ba332126da6a13619f494\"" Jan 17 00:16:06.784939 systemd[1]: run-containerd-runc-k8s.io-2c2b0b70767f7ea50890cc54019446d9160ece83307ba332126da6a13619f494-runc.3wLpZp.mount: Deactivated successfully. Jan 17 00:16:06.799740 systemd[1]: Started cri-containerd-2c2b0b70767f7ea50890cc54019446d9160ece83307ba332126da6a13619f494.scope - libcontainer container 2c2b0b70767f7ea50890cc54019446d9160ece83307ba332126da6a13619f494. 
Jan 17 00:16:06.842188 containerd[1465]: time="2026-01-17T00:16:06.842050764Z" level=info msg="StartContainer for \"2c2b0b70767f7ea50890cc54019446d9160ece83307ba332126da6a13619f494\" returns successfully" Jan 17 00:16:15.006732 sudo[1652]: pam_unix(sudo:session): session closed for user root Jan 17 00:16:15.082205 sshd[1649]: pam_unix(sshd:session): session closed for user core Jan 17 00:16:15.090401 systemd[1]: sshd@6-64.227.98.118:22-4.153.228.146:49492.service: Deactivated successfully. Jan 17 00:16:15.100367 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:16:15.102211 systemd[1]: session-7.scope: Consumed 6.654s CPU time, 147.6M memory peak, 0B memory swap peak. Jan 17 00:16:15.108146 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:16:15.111739 systemd-logind[1450]: Removed session 7. Jan 17 00:16:23.584167 kubelet[2515]: I0117 00:16:23.583832 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5pnr5" podStartSLOduration=19.915542753 podStartE2EDuration="23.583779697s" podCreationTimestamp="2026-01-17 00:16:00 +0000 UTC" firstStartedPulling="2026-01-17 00:16:03.02968765 +0000 UTC m=+8.760645000" lastFinishedPulling="2026-01-17 00:16:06.69792459 +0000 UTC m=+12.428881944" observedRunningTime="2026-01-17 00:16:07.577036732 +0000 UTC m=+13.307994095" watchObservedRunningTime="2026-01-17 00:16:23.583779697 +0000 UTC m=+29.314737050" Jan 17 00:16:23.610021 systemd[1]: Created slice kubepods-besteffort-podf4220b77_2779_4e40_989c_6ce237ba8ad9.slice - libcontainer container kubepods-besteffort-podf4220b77_2779_4e40_989c_6ce237ba8ad9.slice. Jan 17 00:16:23.683531 kubelet[2515]: I0117 00:16:23.683449 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4220b77-2779-4e40-989c-6ce237ba8ad9-tigera-ca-bundle\") pod \"calico-typha-648dfd57f5-t4xqc\" (UID: \"f4220b77-2779-4e40-989c-6ce237ba8ad9\") " pod="calico-system/calico-typha-648dfd57f5-t4xqc" Jan 17 00:16:23.683531 kubelet[2515]: I0117 00:16:23.683518 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcwn\" (UniqueName: \"kubernetes.io/projected/f4220b77-2779-4e40-989c-6ce237ba8ad9-kube-api-access-sqcwn\") pod \"calico-typha-648dfd57f5-t4xqc\" (UID: \"f4220b77-2779-4e40-989c-6ce237ba8ad9\") " pod="calico-system/calico-typha-648dfd57f5-t4xqc" Jan 17 00:16:23.683800 kubelet[2515]: I0117 00:16:23.683550 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f4220b77-2779-4e40-989c-6ce237ba8ad9-typha-certs\") pod \"calico-typha-648dfd57f5-t4xqc\" (UID: \"f4220b77-2779-4e40-989c-6ce237ba8ad9\") " pod="calico-system/calico-typha-648dfd57f5-t4xqc" Jan 17 00:16:23.741677 systemd[1]: Created slice kubepods-besteffort-pod4ec71d72_1df7_4c87_8b16_fb4197792521.slice - libcontainer container kubepods-besteffort-pod4ec71d72_1df7_4c87_8b16_fb4197792521.slice. 
Jan 17 00:16:23.787202 kubelet[2515]: I0117 00:16:23.787137 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-cni-log-dir\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.788065 kubelet[2515]: I0117 00:16:23.787240 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-flexvol-driver-host\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.788065 kubelet[2515]: I0117 00:16:23.787280 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4ec71d72-1df7-4c87-8b16-fb4197792521-node-certs\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.788065 kubelet[2515]: I0117 00:16:23.787303 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ec71d72-1df7-4c87-8b16-fb4197792521-tigera-ca-bundle\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.788065 kubelet[2515]: I0117 00:16:23.787364 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-xtables-lock\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.788065 kubelet[2515]: I0117 00:16:23.787393 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb9cz\" (UniqueName: \"kubernetes.io/projected/4ec71d72-1df7-4c87-8b16-fb4197792521-kube-api-access-nb9cz\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.791462 kubelet[2515]: I0117 00:16:23.791145 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-lib-modules\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.791462 kubelet[2515]: I0117 00:16:23.791210 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-cni-net-dir\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.791462 kubelet[2515]: I0117 00:16:23.791238 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-policysync\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.791462 kubelet[2515]: I0117 00:16:23.791271 2515 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-cni-bin-dir\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.791462 kubelet[2515]: I0117 00:16:23.791297 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-var-lib-calico\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.792213 kubelet[2515]: I0117 00:16:23.791326 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4ec71d72-1df7-4c87-8b16-fb4197792521-var-run-calico\") pod \"calico-node-c7wxp\" (UID: \"4ec71d72-1df7-4c87-8b16-fb4197792521\") " pod="calico-system/calico-node-c7wxp" Jan 17 00:16:23.895011 kubelet[2515]: E0117 00:16:23.894969 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.895278 kubelet[2515]: W0117 00:16:23.895074 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.895832 kubelet[2515]: E0117 00:16:23.895783 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.904651 kubelet[2515]: E0117 00:16:23.904607 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.904651 kubelet[2515]: W0117 00:16:23.904642 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.904924 kubelet[2515]: E0117 00:16:23.904674 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.928168 kubelet[2515]: E0117 00:16:23.927606 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:23.932307 kubelet[2515]: E0117 00:16:23.932277 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.932597 kubelet[2515]: W0117 00:16:23.932562 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.932699 kubelet[2515]: E0117 00:16:23.932686 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:23.932758 containerd[1465]: time="2026-01-17T00:16:23.932679298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-648dfd57f5-t4xqc,Uid:f4220b77-2779-4e40-989c-6ce237ba8ad9,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:23.939190 kubelet[2515]: E0117 00:16:23.939125 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:23.945464 kubelet[2515]: I0117 00:16:23.944711 2515 status_manager.go:890] "Failed to get status for pod" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" pod="calico-system/csi-node-driver-lfkr9" err="pods \"csi-node-driver-lfkr9\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" Jan 17 00:16:23.969972 kubelet[2515]: E0117 00:16:23.969925 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.971710 kubelet[2515]: W0117 00:16:23.971460 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.971710 kubelet[2515]: E0117 00:16:23.971519 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.973174 kubelet[2515]: E0117 00:16:23.972488 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.973174 kubelet[2515]: W0117 00:16:23.972523 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.973174 kubelet[2515]: E0117 00:16:23.972549 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.978284 kubelet[2515]: E0117 00:16:23.975230 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.978284 kubelet[2515]: W0117 00:16:23.975481 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.978284 kubelet[2515]: E0117 00:16:23.975603 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:23.980475 kubelet[2515]: E0117 00:16:23.980171 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.980475 kubelet[2515]: W0117 00:16:23.980205 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.980475 kubelet[2515]: E0117 00:16:23.980241 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.982471 kubelet[2515]: E0117 00:16:23.982102 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.982471 kubelet[2515]: W0117 00:16:23.982132 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.982471 kubelet[2515]: E0117 00:16:23.982167 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.985639 kubelet[2515]: E0117 00:16:23.984697 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.985639 kubelet[2515]: W0117 00:16:23.984733 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.985639 kubelet[2515]: E0117 00:16:23.984762 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.985844 kubelet[2515]: E0117 00:16:23.985703 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.985844 kubelet[2515]: W0117 00:16:23.985721 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.985844 kubelet[2515]: E0117 00:16:23.985743 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.986542 kubelet[2515]: E0117 00:16:23.986214 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.986542 kubelet[2515]: W0117 00:16:23.986231 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.986542 kubelet[2515]: E0117 00:16:23.986282 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:23.986931 kubelet[2515]: E0117 00:16:23.986908 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.986931 kubelet[2515]: W0117 00:16:23.986925 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.987039 kubelet[2515]: E0117 00:16:23.986944 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.987210 kubelet[2515]: E0117 00:16:23.987191 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.987210 kubelet[2515]: W0117 00:16:23.987204 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.987338 kubelet[2515]: E0117 00:16:23.987214 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.987988 kubelet[2515]: E0117 00:16:23.987400 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.987988 kubelet[2515]: W0117 00:16:23.987439 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.987988 kubelet[2515]: E0117 00:16:23.987450 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.987988 kubelet[2515]: E0117 00:16:23.987661 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.987988 kubelet[2515]: W0117 00:16:23.987670 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.987988 kubelet[2515]: E0117 00:16:23.987679 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.987988 kubelet[2515]: E0117 00:16:23.987914 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.987988 kubelet[2515]: W0117 00:16:23.987925 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.987988 kubelet[2515]: E0117 00:16:23.987934 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:23.988780 kubelet[2515]: E0117 00:16:23.988217 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.988780 kubelet[2515]: W0117 00:16:23.988227 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.988780 kubelet[2515]: E0117 00:16:23.988236 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.988780 kubelet[2515]: E0117 00:16:23.988472 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.988780 kubelet[2515]: W0117 00:16:23.988481 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.988780 kubelet[2515]: E0117 00:16:23.988490 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.989043 kubelet[2515]: E0117 00:16:23.988836 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.989043 kubelet[2515]: W0117 00:16:23.988846 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.989043 kubelet[2515]: E0117 00:16:23.988856 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.989167 kubelet[2515]: E0117 00:16:23.989103 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.989167 kubelet[2515]: W0117 00:16:23.989111 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.989167 kubelet[2515]: E0117 00:16:23.989121 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.990434 kubelet[2515]: E0117 00:16:23.989335 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.990434 kubelet[2515]: W0117 00:16:23.989348 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.990434 kubelet[2515]: E0117 00:16:23.989357 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:23.990434 kubelet[2515]: E0117 00:16:23.989646 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.990434 kubelet[2515]: W0117 00:16:23.989655 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.990434 kubelet[2515]: E0117 00:16:23.989665 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.990434 kubelet[2515]: E0117 00:16:23.989837 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.990434 kubelet[2515]: W0117 00:16:23.989844 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.990434 kubelet[2515]: E0117 00:16:23.989852 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.996028 kubelet[2515]: E0117 00:16:23.995537 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.996028 kubelet[2515]: W0117 00:16:23.995571 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.996028 kubelet[2515]: E0117 00:16:23.995599 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:23.996028 kubelet[2515]: I0117 00:16:23.995638 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/74b48e50-ea55-46c5-84cf-509f72a7af13-varrun\") pod \"csi-node-driver-lfkr9\" (UID: \"74b48e50-ea55-46c5-84cf-509f72a7af13\") " pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:23.998974 kubelet[2515]: E0117 00:16:23.998895 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:23.999136 kubelet[2515]: W0117 00:16:23.998924 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:23.999136 kubelet[2515]: E0117 00:16:23.999040 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:23.999136 kubelet[2515]: I0117 00:16:23.999081 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74b48e50-ea55-46c5-84cf-509f72a7af13-socket-dir\") pod \"csi-node-driver-lfkr9\" (UID: \"74b48e50-ea55-46c5-84cf-509f72a7af13\") " pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:24.003222 kubelet[2515]: E0117 00:16:24.002758 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.003222 kubelet[2515]: W0117 00:16:24.002799 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.003222 kubelet[2515]: E0117 00:16:24.002826 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.003222 kubelet[2515]: E0117 00:16:24.003222 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.003566 kubelet[2515]: W0117 00:16:24.003260 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.003566 kubelet[2515]: E0117 00:16:24.003293 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.005442 kubelet[2515]: E0117 00:16:24.004866 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.005442 kubelet[2515]: W0117 00:16:24.004894 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.005442 kubelet[2515]: E0117 00:16:24.004939 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.005673 kubelet[2515]: I0117 00:16:24.005589 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74b48e50-ea55-46c5-84cf-509f72a7af13-kubelet-dir\") pod \"csi-node-driver-lfkr9\" (UID: \"74b48e50-ea55-46c5-84cf-509f72a7af13\") " pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:24.006730 kubelet[2515]: E0117 00:16:24.006462 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.006730 kubelet[2515]: W0117 00:16:24.006484 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.006730 kubelet[2515]: E0117 00:16:24.006721 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.008385 kubelet[2515]: E0117 00:16:24.007841 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.008385 kubelet[2515]: W0117 00:16:24.007862 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.008385 kubelet[2515]: E0117 00:16:24.008286 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.009142 kubelet[2515]: E0117 00:16:24.009048 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.009142 kubelet[2515]: W0117 00:16:24.009074 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.009710 kubelet[2515]: E0117 00:16:24.009678 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.010001 containerd[1465]: time="2026-01-17T00:16:24.009687742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:24.012023 kubelet[2515]: I0117 00:16:24.010308 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74b48e50-ea55-46c5-84cf-509f72a7af13-registration-dir\") pod \"csi-node-driver-lfkr9\" (UID: \"74b48e50-ea55-46c5-84cf-509f72a7af13\") " pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:24.012023 kubelet[2515]: E0117 00:16:24.011166 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.012023 kubelet[2515]: W0117 00:16:24.011187 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.012023 kubelet[2515]: E0117 00:16:24.011216 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.012457 containerd[1465]: time="2026-01-17T00:16:24.009891430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:24.013168 kubelet[2515]: E0117 00:16:24.013140 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.013168 kubelet[2515]: W0117 00:16:24.013164 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.013305 kubelet[2515]: E0117 00:16:24.013263 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.013403 containerd[1465]: time="2026-01-17T00:16:24.013034462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:24.015612 containerd[1465]: time="2026-01-17T00:16:24.013580441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:24.015729 kubelet[2515]: E0117 00:16:24.015672 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.015729 kubelet[2515]: W0117 00:16:24.015687 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.016672 kubelet[2515]: E0117 00:16:24.016639 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.016781 kubelet[2515]: I0117 00:16:24.016687 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7nr\" (UniqueName: \"kubernetes.io/projected/74b48e50-ea55-46c5-84cf-509f72a7af13-kube-api-access-jj7nr\") pod \"csi-node-driver-lfkr9\" (UID: \"74b48e50-ea55-46c5-84cf-509f72a7af13\") " pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:24.017774 kubelet[2515]: E0117 00:16:24.017079 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.017774 kubelet[2515]: W0117 00:16:24.017100 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.017774 kubelet[2515]: E0117 00:16:24.017118 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.018671 kubelet[2515]: E0117 00:16:24.018642 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.018671 kubelet[2515]: W0117 00:16:24.018663 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.019453 kubelet[2515]: E0117 00:16:24.019062 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.020279 kubelet[2515]: E0117 00:16:24.020249 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.020279 kubelet[2515]: W0117 00:16:24.020269 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.020475 kubelet[2515]: E0117 00:16:24.020288 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.023001 kubelet[2515]: E0117 00:16:24.022800 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.023001 kubelet[2515]: W0117 00:16:24.022822 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.023001 kubelet[2515]: E0117 00:16:24.022842 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.053660 kubelet[2515]: E0117 00:16:24.053599 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:24.056693 systemd[1]: Started cri-containerd-ffd43193171647919133818a09b9189fb7d2bcd196fddc17ce774ed31336e270.scope - libcontainer container ffd43193171647919133818a09b9189fb7d2bcd196fddc17ce774ed31336e270. Jan 17 00:16:24.058672 containerd[1465]: time="2026-01-17T00:16:24.058223919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7wxp,Uid:4ec71d72-1df7-4c87-8b16-fb4197792521,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:24.120457 kubelet[2515]: E0117 00:16:24.120336 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.120457 kubelet[2515]: W0117 00:16:24.120450 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.120968 kubelet[2515]: E0117 00:16:24.120487 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.123671 kubelet[2515]: E0117 00:16:24.123608 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.123671 kubelet[2515]: W0117 00:16:24.123658 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.123885 kubelet[2515]: E0117 00:16:24.123701 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.125263 kubelet[2515]: E0117 00:16:24.125201 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.125263 kubelet[2515]: W0117 00:16:24.125254 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.125510 kubelet[2515]: E0117 00:16:24.125334 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.127469 kubelet[2515]: E0117 00:16:24.126103 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.127469 kubelet[2515]: W0117 00:16:24.126131 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.127469 kubelet[2515]: E0117 00:16:24.126357 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.128719 containerd[1465]: time="2026-01-17T00:16:24.128574884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:24.129038 containerd[1465]: time="2026-01-17T00:16:24.128692348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:24.129038 containerd[1465]: time="2026-01-17T00:16:24.128716256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:24.129038 containerd[1465]: time="2026-01-17T00:16:24.128856759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:24.129815 kubelet[2515]: E0117 00:16:24.129786 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.129815 kubelet[2515]: W0117 00:16:24.129814 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.129983 kubelet[2515]: E0117 00:16:24.129842 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.138921 kubelet[2515]: E0117 00:16:24.138600 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.138921 kubelet[2515]: W0117 00:16:24.138739 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.140181 kubelet[2515]: E0117 00:16:24.138811 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.141597 kubelet[2515]: E0117 00:16:24.140590 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.141597 kubelet[2515]: W0117 00:16:24.140614 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.141597 kubelet[2515]: E0117 00:16:24.141012 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.142769 kubelet[2515]: E0117 00:16:24.142072 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.142769 kubelet[2515]: W0117 00:16:24.142455 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.142769 kubelet[2515]: E0117 00:16:24.142489 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.143498 kubelet[2515]: E0117 00:16:24.143281 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.143498 kubelet[2515]: W0117 00:16:24.143303 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.143880 kubelet[2515]: E0117 00:16:24.143399 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.144256 kubelet[2515]: E0117 00:16:24.144051 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.144256 kubelet[2515]: W0117 00:16:24.144067 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.144663 kubelet[2515]: E0117 00:16:24.144354 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.144663 kubelet[2515]: E0117 00:16:24.144549 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.144663 kubelet[2515]: W0117 00:16:24.144562 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.144979 kubelet[2515]: E0117 00:16:24.144952 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.145461 kubelet[2515]: E0117 00:16:24.145112 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.145461 kubelet[2515]: W0117 00:16:24.145145 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.145461 kubelet[2515]: E0117 00:16:24.145260 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.145943 kubelet[2515]: E0117 00:16:24.145562 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.145943 kubelet[2515]: W0117 00:16:24.145573 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.145943 kubelet[2515]: E0117 00:16:24.145600 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.146539 kubelet[2515]: E0117 00:16:24.146309 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.146539 kubelet[2515]: W0117 00:16:24.146323 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.146539 kubelet[2515]: E0117 00:16:24.146338 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.147172 kubelet[2515]: E0117 00:16:24.146936 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.147172 kubelet[2515]: W0117 00:16:24.146950 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.147172 kubelet[2515]: E0117 00:16:24.146964 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.147682 kubelet[2515]: E0117 00:16:24.147561 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.147682 kubelet[2515]: W0117 00:16:24.147578 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.147682 kubelet[2515]: E0117 00:16:24.147603 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.149256 kubelet[2515]: E0117 00:16:24.149211 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.149256 kubelet[2515]: W0117 00:16:24.149241 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.149756 kubelet[2515]: E0117 00:16:24.149263 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.151911 kubelet[2515]: E0117 00:16:24.151880 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.151911 kubelet[2515]: W0117 00:16:24.151908 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.152132 kubelet[2515]: E0117 00:16:24.151941 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.152667 kubelet[2515]: E0117 00:16:24.152525 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.152667 kubelet[2515]: W0117 00:16:24.152549 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.152667 kubelet[2515]: E0117 00:16:24.152594 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.154157 kubelet[2515]: E0117 00:16:24.153911 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.154157 kubelet[2515]: W0117 00:16:24.153937 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.154157 kubelet[2515]: E0117 00:16:24.154088 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.155479 kubelet[2515]: E0117 00:16:24.155355 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.155732 kubelet[2515]: W0117 00:16:24.155582 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.156651 kubelet[2515]: E0117 00:16:24.155943 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.157156 kubelet[2515]: E0117 00:16:24.157133 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.157156 kubelet[2515]: W0117 00:16:24.157153 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.157950 kubelet[2515]: E0117 00:16:24.157359 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:24.158441 kubelet[2515]: E0117 00:16:24.158221 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.158441 kubelet[2515]: W0117 00:16:24.158349 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.158441 kubelet[2515]: E0117 00:16:24.158377 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.162502 kubelet[2515]: E0117 00:16:24.161579 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.162502 kubelet[2515]: W0117 00:16:24.161610 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.162502 kubelet[2515]: E0117 00:16:24.161633 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.164218 kubelet[2515]: E0117 00:16:24.163108 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.164218 kubelet[2515]: W0117 00:16:24.163131 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.164218 kubelet[2515]: E0117 00:16:24.163154 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.176263 kubelet[2515]: E0117 00:16:24.176220 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:24.176590 kubelet[2515]: W0117 00:16:24.176486 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:24.176590 kubelet[2515]: E0117 00:16:24.176516 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:24.201728 systemd[1]: Started cri-containerd-6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56.scope - libcontainer container 6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56. 
Jan 17 00:16:24.247460 containerd[1465]: time="2026-01-17T00:16:24.247172281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-648dfd57f5-t4xqc,Uid:f4220b77-2779-4e40-989c-6ce237ba8ad9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffd43193171647919133818a09b9189fb7d2bcd196fddc17ce774ed31336e270\"" Jan 17 00:16:24.257291 kubelet[2515]: E0117 00:16:24.257243 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:24.261914 containerd[1465]: time="2026-01-17T00:16:24.261618537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7wxp,Uid:4ec71d72-1df7-4c87-8b16-fb4197792521,Namespace:calico-system,Attempt:0,} returns sandbox id \"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\"" Jan 17 00:16:24.263562 containerd[1465]: time="2026-01-17T00:16:24.263501787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:16:24.263890 kubelet[2515]: E0117 00:16:24.263676 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:25.479548 kubelet[2515]: E0117 00:16:25.479463 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:25.770974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498173092.mount: Deactivated successfully. 
Jan 17 00:16:26.914721 containerd[1465]: time="2026-01-17T00:16:26.914654281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:26.921626 containerd[1465]: time="2026-01-17T00:16:26.921226843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 17 00:16:26.925521 containerd[1465]: time="2026-01-17T00:16:26.925456730Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:26.932321 containerd[1465]: time="2026-01-17T00:16:26.931998561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:26.933101 containerd[1465]: time="2026-01-17T00:16:26.933046916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.669349362s" Jan 17 00:16:26.933101 containerd[1465]: time="2026-01-17T00:16:26.933094825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:16:26.939198 containerd[1465]: time="2026-01-17T00:16:26.938496728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:16:26.973555 containerd[1465]: time="2026-01-17T00:16:26.973311680Z" level=info msg="CreateContainer within sandbox \"ffd43193171647919133818a09b9189fb7d2bcd196fddc17ce774ed31336e270\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:16:27.031962 containerd[1465]: time="2026-01-17T00:16:27.031878294Z" level=info msg="CreateContainer within sandbox \"ffd43193171647919133818a09b9189fb7d2bcd196fddc17ce774ed31336e270\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dab1e3b2edd47ff96d2f4f27836d3879fe447d608ee60c516410b35fd69a6bb8\"" Jan 17 00:16:27.032656 containerd[1465]: time="2026-01-17T00:16:27.032607766Z" level=info msg="StartContainer for \"dab1e3b2edd47ff96d2f4f27836d3879fe447d608ee60c516410b35fd69a6bb8\"" Jan 17 00:16:27.133818 systemd[1]: Started cri-containerd-dab1e3b2edd47ff96d2f4f27836d3879fe447d608ee60c516410b35fd69a6bb8.scope - libcontainer container dab1e3b2edd47ff96d2f4f27836d3879fe447d608ee60c516410b35fd69a6bb8. 
Jan 17 00:16:27.209845 containerd[1465]: time="2026-01-17T00:16:27.209086855Z" level=info msg="StartContainer for \"dab1e3b2edd47ff96d2f4f27836d3879fe447d608ee60c516410b35fd69a6bb8\" returns successfully" Jan 17 00:16:27.479732 kubelet[2515]: E0117 00:16:27.479391 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:27.650472 kubelet[2515]: E0117 00:16:27.649771 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:27.728456 kubelet[2515]: E0117 00:16:27.728381 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.728456 kubelet[2515]: W0117 00:16:27.728443 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.729226 kubelet[2515]: E0117 00:16:27.728477 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.729226 kubelet[2515]: E0117 00:16:27.728945 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.729226 kubelet[2515]: W0117 00:16:27.728971 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.729226 kubelet[2515]: E0117 00:16:27.728999 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.731348 kubelet[2515]: E0117 00:16:27.729781 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.731348 kubelet[2515]: W0117 00:16:27.729796 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.731348 kubelet[2515]: E0117 00:16:27.729814 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.731348 kubelet[2515]: E0117 00:16:27.730649 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.731348 kubelet[2515]: W0117 00:16:27.730663 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.731348 kubelet[2515]: E0117 00:16:27.730678 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:27.733302 kubelet[2515]: E0117 00:16:27.731381 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.733302 kubelet[2515]: W0117 00:16:27.731393 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.733302 kubelet[2515]: E0117 00:16:27.731449 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.733302 kubelet[2515]: E0117 00:16:27.731711 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.733302 kubelet[2515]: W0117 00:16:27.731722 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.733302 kubelet[2515]: E0117 00:16:27.731733 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.733674 kubelet[2515]: E0117 00:16:27.733624 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.733674 kubelet[2515]: W0117 00:16:27.733637 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.733674 kubelet[2515]: E0117 00:16:27.733651 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.734306 kubelet[2515]: E0117 00:16:27.733927 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.734306 kubelet[2515]: W0117 00:16:27.733937 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.734306 kubelet[2515]: E0117 00:16:27.733948 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.734306 kubelet[2515]: E0117 00:16:27.734249 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.734306 kubelet[2515]: W0117 00:16:27.734258 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.734306 kubelet[2515]: E0117 00:16:27.734269 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:27.734772 kubelet[2515]: E0117 00:16:27.734745 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.734772 kubelet[2515]: W0117 00:16:27.734765 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.735538 kubelet[2515]: E0117 00:16:27.734780 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.735538 kubelet[2515]: E0117 00:16:27.735514 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.735538 kubelet[2515]: W0117 00:16:27.735526 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.735538 kubelet[2515]: E0117 00:16:27.735539 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.735905 kubelet[2515]: E0117 00:16:27.735779 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.735905 kubelet[2515]: W0117 00:16:27.735789 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.735905 kubelet[2515]: E0117 00:16:27.735800 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.736959 kubelet[2515]: E0117 00:16:27.736936 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.736959 kubelet[2515]: W0117 00:16:27.736954 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.737262 kubelet[2515]: E0117 00:16:27.736968 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.737654 kubelet[2515]: E0117 00:16:27.737632 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.737654 kubelet[2515]: W0117 00:16:27.737647 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.737809 kubelet[2515]: E0117 00:16:27.737659 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:27.738948 kubelet[2515]: E0117 00:16:27.738894 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.738948 kubelet[2515]: W0117 00:16:27.738911 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.738948 kubelet[2515]: E0117 00:16:27.738926 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.772891 kubelet[2515]: E0117 00:16:27.772847 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.772891 kubelet[2515]: W0117 00:16:27.772882 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.773140 kubelet[2515]: E0117 00:16:27.772914 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.773457 kubelet[2515]: E0117 00:16:27.773438 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.773457 kubelet[2515]: W0117 00:16:27.773456 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.773627 kubelet[2515]: E0117 00:16:27.773484 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.773859 kubelet[2515]: E0117 00:16:27.773836 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.773859 kubelet[2515]: W0117 00:16:27.773852 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.774006 kubelet[2515]: E0117 00:16:27.773866 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.774443 kubelet[2515]: E0117 00:16:27.774387 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.774443 kubelet[2515]: W0117 00:16:27.774404 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.774443 kubelet[2515]: E0117 00:16:27.774432 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:27.775622 kubelet[2515]: E0117 00:16:27.775595 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.775622 kubelet[2515]: W0117 00:16:27.775613 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.775622 kubelet[2515]: E0117 00:16:27.775631 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.775976 kubelet[2515]: E0117 00:16:27.775903 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.775976 kubelet[2515]: W0117 00:16:27.775912 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.775976 kubelet[2515]: E0117 00:16:27.775930 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.776361 kubelet[2515]: E0117 00:16:27.776343 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.776454 kubelet[2515]: W0117 00:16:27.776361 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.776522 kubelet[2515]: E0117 00:16:27.776464 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.776746 kubelet[2515]: E0117 00:16:27.776732 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.776818 kubelet[2515]: W0117 00:16:27.776746 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.776950 kubelet[2515]: E0117 00:16:27.776916 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.777252 kubelet[2515]: E0117 00:16:27.777229 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.777252 kubelet[2515]: W0117 00:16:27.777244 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.777399 kubelet[2515]: E0117 00:16:27.777262 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:27.777819 kubelet[2515]: E0117 00:16:27.777795 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.777819 kubelet[2515]: W0117 00:16:27.777812 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.777951 kubelet[2515]: E0117 00:16:27.777829 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.778172 kubelet[2515]: E0117 00:16:27.778158 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.778279 kubelet[2515]: W0117 00:16:27.778265 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.778440 kubelet[2515]: E0117 00:16:27.778419 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.778886 kubelet[2515]: E0117 00:16:27.778860 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.778886 kubelet[2515]: W0117 00:16:27.778877 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.778974 kubelet[2515]: E0117 00:16:27.778894 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.779305 kubelet[2515]: E0117 00:16:27.779284 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.779305 kubelet[2515]: W0117 00:16:27.779298 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.779663 kubelet[2515]: E0117 00:16:27.779628 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.779806 kubelet[2515]: E0117 00:16:27.779784 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.779806 kubelet[2515]: W0117 00:16:27.779799 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.779898 kubelet[2515]: E0117 00:16:27.779815 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:27.780450 kubelet[2515]: E0117 00:16:27.780295 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.780450 kubelet[2515]: W0117 00:16:27.780315 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.780450 kubelet[2515]: E0117 00:16:27.780335 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.781460 kubelet[2515]: E0117 00:16:27.780873 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.781460 kubelet[2515]: W0117 00:16:27.780893 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.781460 kubelet[2515]: E0117 00:16:27.780918 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.781623 kubelet[2515]: E0117 00:16:27.781590 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.781623 kubelet[2515]: W0117 00:16:27.781604 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.781623 kubelet[2515]: E0117 00:16:27.781618 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:16:27.782619 kubelet[2515]: E0117 00:16:27.782596 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:16:27.782732 kubelet[2515]: W0117 00:16:27.782716 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:16:27.782803 kubelet[2515]: E0117 00:16:27.782790 2515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:16:28.433284 containerd[1465]: time="2026-01-17T00:16:28.433214016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:28.435771 containerd[1465]: time="2026-01-17T00:16:28.435615546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:16:28.439483 containerd[1465]: time="2026-01-17T00:16:28.439192260Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:28.446653 containerd[1465]: time="2026-01-17T00:16:28.446594257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:28.448153 containerd[1465]: time="2026-01-17T00:16:28.448005460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.509393992s" Jan 17 00:16:28.448153 containerd[1465]: time="2026-01-17T00:16:28.448055436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:16:28.453388 containerd[1465]: time="2026-01-17T00:16:28.453115709Z" level=info msg="CreateContainer within sandbox \"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:16:28.486319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988226099.mount: Deactivated successfully. Jan 17 00:16:28.514734 containerd[1465]: time="2026-01-17T00:16:28.514669014Z" level=info msg="CreateContainer within sandbox \"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27\"" Jan 17 00:16:28.515490 containerd[1465]: time="2026-01-17T00:16:28.515343780Z" level=info msg="StartContainer for \"30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27\"" Jan 17 00:16:28.586815 systemd[1]: Started cri-containerd-30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27.scope - libcontainer container 30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27. Jan 17 00:16:28.632103 containerd[1465]: time="2026-01-17T00:16:28.632021759Z" level=info msg="StartContainer for \"30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27\" returns successfully" Jan 17 00:16:28.649078 systemd[1]: cri-containerd-30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27.scope: Deactivated successfully. 
Jan 17 00:16:28.655102 kubelet[2515]: I0117 00:16:28.654885 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:16:28.657328 kubelet[2515]: E0117 00:16:28.656722 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:28.657328 kubelet[2515]: E0117 00:16:28.656846 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:28.684836 kubelet[2515]: I0117 00:16:28.683793 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-648dfd57f5-t4xqc" podStartSLOduration=3.007713203 podStartE2EDuration="5.68376803s" podCreationTimestamp="2026-01-17 00:16:23 +0000 UTC" firstStartedPulling="2026-01-17 00:16:24.262027769 +0000 UTC m=+29.992985123" lastFinishedPulling="2026-01-17 00:16:26.938082597 +0000 UTC m=+32.669039950" observedRunningTime="2026-01-17 00:16:27.713720072 +0000 UTC m=+33.444677434" watchObservedRunningTime="2026-01-17 00:16:28.68376803 +0000 UTC m=+34.414725395" Jan 17 00:16:28.756785 containerd[1465]: time="2026-01-17T00:16:28.709347522Z" level=info msg="shim disconnected" id=30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27 namespace=k8s.io Jan 17 00:16:28.757258 containerd[1465]: time="2026-01-17T00:16:28.757042830Z" level=warning msg="cleaning up after shim disconnected" id=30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27 namespace=k8s.io Jan 17 00:16:28.757258 containerd[1465]: time="2026-01-17T00:16:28.757067391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:16:28.957510 systemd[1]: run-containerd-runc-k8s.io-30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27-runc.GBICf8.mount: Deactivated successfully. Jan 17 00:16:28.957696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30491e3d71c91f6d471e7185dc0bcdb060e56155f81366fbba216a3139eeea27-rootfs.mount: Deactivated successfully. 
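The pod_startup_latency_tracker record above can be decoded from its own fields: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (00:16:28.683768 - 00:16:23 ≈ 5.684s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling ≈ 2.676s), which gives ≈ 3.008s and matches the logged value. A small sketch that reproduces the arithmetic from the logged timestamps (the layout string is an assumption matching the log's "+0000 UTC" format):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-17 00:16:23 +0000 UTC")            // podCreationTimestamp
	firstPull := parse("2026-01-17 00:16:24.262027769 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-17 00:16:26.938082597 +0000 UTC")  // lastFinishedPulling
	running := parse("2026-01-17 00:16:28.68376803 +0000 UTC")    // observedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration (image pull time excluded)
	fmt.Println(e2e, slo)                // ~5.68376803s ~3.007713202s
}
```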
Jan 17 00:16:29.478967 kubelet[2515]: E0117 00:16:29.478609 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:29.660784 kubelet[2515]: E0117 00:16:29.660557 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:29.662119 containerd[1465]: time="2026-01-17T00:16:29.662020727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:16:31.479491 kubelet[2515]: E0117 00:16:31.478812 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:32.138061 kubelet[2515]: I0117 00:16:32.137977 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:16:32.138919 kubelet[2515]: E0117 00:16:32.138778 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:32.672359 kubelet[2515]: E0117 00:16:32.672241 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:33.297669 containerd[1465]: time="2026-01-17T00:16:33.297597441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:33.300114 containerd[1465]: time="2026-01-17T00:16:33.300050879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:16:33.303261 containerd[1465]: time="2026-01-17T00:16:33.303174577Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:33.310794 containerd[1465]: time="2026-01-17T00:16:33.310720743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:33.313139 containerd[1465]: time="2026-01-17T00:16:33.312906874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.650835252s" Jan 17 00:16:33.313139 containerd[1465]: time="2026-01-17T00:16:33.312961490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:16:33.316918 containerd[1465]: time="2026-01-17T00:16:33.316669641Z" level=info msg="CreateContainer within sandbox 
\"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:16:33.353818 containerd[1465]: time="2026-01-17T00:16:33.353754321Z" level=info msg="CreateContainer within sandbox \"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18\"" Jan 17 00:16:33.366202 containerd[1465]: time="2026-01-17T00:16:33.364378524Z" level=info msg="StartContainer for \"f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18\"" Jan 17 00:16:33.428749 systemd[1]: run-containerd-runc-k8s.io-f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18-runc.DSoMQT.mount: Deactivated successfully. Jan 17 00:16:33.439742 systemd[1]: Started cri-containerd-f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18.scope - libcontainer container f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18. Jan 17 00:16:33.479094 kubelet[2515]: E0117 00:16:33.479009 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:33.516305 containerd[1465]: time="2026-01-17T00:16:33.515575497Z" level=info msg="StartContainer for \"f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18\" returns successfully" Jan 17 00:16:33.680068 kubelet[2515]: E0117 00:16:33.679617 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:34.351148 systemd[1]: cri-containerd-f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18.scope: Deactivated successfully. Jan 17 00:16:34.393280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18-rootfs.mount: Deactivated successfully. 
Jan 17 00:16:34.409578 containerd[1465]: time="2026-01-17T00:16:34.408733038Z" level=info msg="shim disconnected" id=f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18 namespace=k8s.io Jan 17 00:16:34.410251 containerd[1465]: time="2026-01-17T00:16:34.409766645Z" level=warning msg="cleaning up after shim disconnected" id=f55c41ef2b9d03d3db4c58b5b82417eba75f1c4e9a98fea548a49e9cb3b17b18 namespace=k8s.io Jan 17 00:16:34.410251 containerd[1465]: time="2026-01-17T00:16:34.409804513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:16:34.448549 kubelet[2515]: I0117 00:16:34.448476 2515 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:16:34.513742 kubelet[2515]: I0117 00:16:34.513356 2515 status_manager.go:890] "Failed to get status for pod" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" err="pods \"calico-kube-controllers-76574bc5-8kb79\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" Jan 17 00:16:34.529003 systemd[1]: Created slice kubepods-besteffort-pod96db8296_fac0_44e6_a2a4_5921dbbfa75c.slice - libcontainer container kubepods-besteffort-pod96db8296_fac0_44e6_a2a4_5921dbbfa75c.slice. Jan 17 00:16:34.540084 kubelet[2515]: W0117 00:16:34.539993 2515 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.3.6-n-912fd252f4" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object Jan 17 00:16:34.540849 kubelet[2515]: W0117 00:16:34.540805 2515 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-912fd252f4" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object Jan 17 00:16:34.542071 kubelet[2515]: E0117 00:16:34.542000 2515 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" logger="UnhandledError" Jan 17 00:16:34.546584 kubelet[2515]: E0117 00:16:34.545505 2515 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" logger="UnhandledError" Jan 17 00:16:34.546584 kubelet[2515]: W0117 00:16:34.545653 2515 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.6-n-912fd252f4" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 
'ci-4081.3.6-n-912fd252f4' and this object Jan 17 00:16:34.546584 kubelet[2515]: E0117 00:16:34.545695 2515 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" logger="UnhandledError" Jan 17 00:16:34.546584 kubelet[2515]: W0117 00:16:34.546017 2515 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.6-n-912fd252f4" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object Jan 17 00:16:34.546981 kubelet[2515]: E0117 00:16:34.546037 2515 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.6-n-912fd252f4\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-912fd252f4' and this object" logger="UnhandledError" Jan 17 00:16:34.561153 systemd[1]: Created slice kubepods-burstable-pod8cc8f574_3f41_42c0_ad2b_73a6264664c2.slice - libcontainer container kubepods-burstable-pod8cc8f574_3f41_42c0_ad2b_73a6264664c2.slice. Jan 17 00:16:34.584026 systemd[1]: Created slice kubepods-besteffort-podd3a2c65a_63b7_42fa_9521_230bac7a856c.slice - libcontainer container kubepods-besteffort-podd3a2c65a_63b7_42fa_9521_230bac7a856c.slice. Jan 17 00:16:34.603900 systemd[1]: Created slice kubepods-besteffort-pode3b01780_da16_4a69_b846_f0add78dd84a.slice - libcontainer container kubepods-besteffort-pode3b01780_da16_4a69_b846_f0add78dd84a.slice. Jan 17 00:16:34.617284 systemd[1]: Created slice kubepods-besteffort-podfd23fc1c_2ea9_47e8_be5f_5279e384fd8c.slice - libcontainer container kubepods-besteffort-podfd23fc1c_2ea9_47e8_be5f_5279e384fd8c.slice. Jan 17 00:16:34.639038 systemd[1]: Created slice kubepods-besteffort-podabee5d80_98e2_4d1b_a6be_4919665c817d.slice - libcontainer container kubepods-besteffort-podabee5d80_98e2_4d1b_a6be_4919665c817d.slice. 
Jan 17 00:16:34.642214 kubelet[2515]: I0117 00:16:34.641675 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abee5d80-98e2-4d1b-a6be-4919665c817d-config\") pod \"goldmane-666569f655-2b6w7\" (UID: \"abee5d80-98e2-4d1b-a6be-4919665c817d\") " pod="calico-system/goldmane-666569f655-2b6w7" Jan 17 00:16:34.642214 kubelet[2515]: I0117 00:16:34.641739 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abee5d80-98e2-4d1b-a6be-4919665c817d-goldmane-ca-bundle\") pod \"goldmane-666569f655-2b6w7\" (UID: \"abee5d80-98e2-4d1b-a6be-4919665c817d\") " pod="calico-system/goldmane-666569f655-2b6w7" Jan 17 00:16:34.642214 kubelet[2515]: I0117 00:16:34.641815 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96db8296-fac0-44e6-a2a4-5921dbbfa75c-tigera-ca-bundle\") pod \"calico-kube-controllers-76574bc5-8kb79\" (UID: \"96db8296-fac0-44e6-a2a4-5921dbbfa75c\") " pod="calico-system/calico-kube-controllers-76574bc5-8kb79" Jan 17 00:16:34.642214 kubelet[2515]: I0117 00:16:34.641846 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69kb\" (UniqueName: \"kubernetes.io/projected/96db8296-fac0-44e6-a2a4-5921dbbfa75c-kube-api-access-x69kb\") pod \"calico-kube-controllers-76574bc5-8kb79\" (UID: \"96db8296-fac0-44e6-a2a4-5921dbbfa75c\") " pod="calico-system/calico-kube-controllers-76574bc5-8kb79" Jan 17 00:16:34.642214 kubelet[2515]: I0117 00:16:34.641872 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fd23fc1c-2ea9-47e8-be5f-5279e384fd8c-calico-apiserver-certs\") pod \"calico-apiserver-598d5588f5-xr65t\" (UID: \"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c\") " pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" Jan 17 00:16:34.642955 kubelet[2515]: I0117 00:16:34.641895 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/abee5d80-98e2-4d1b-a6be-4919665c817d-goldmane-key-pair\") pod \"goldmane-666569f655-2b6w7\" (UID: \"abee5d80-98e2-4d1b-a6be-4919665c817d\") " pod="calico-system/goldmane-666569f655-2b6w7" Jan 17 00:16:34.642955 kubelet[2515]: I0117 00:16:34.641931 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f67ac25-9d9d-4a2d-8ba8-729f2f585a51-config-volume\") pod \"coredns-668d6bf9bc-rslgw\" (UID: \"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51\") " pod="kube-system/coredns-668d6bf9bc-rslgw" Jan 17 00:16:34.642955 kubelet[2515]: I0117 00:16:34.641964 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959k9\" (UniqueName: \"kubernetes.io/projected/e3b01780-da16-4a69-b846-f0add78dd84a-kube-api-access-959k9\") pod \"whisker-6c748659fb-h9bq5\" (UID: \"e3b01780-da16-4a69-b846-f0add78dd84a\") " pod="calico-system/whisker-6c748659fb-h9bq5" Jan 17 00:16:34.642955 kubelet[2515]: I0117 00:16:34.641995 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngkr\" (UniqueName: 
\"kubernetes.io/projected/abee5d80-98e2-4d1b-a6be-4919665c817d-kube-api-access-tngkr\") pod \"goldmane-666569f655-2b6w7\" (UID: \"abee5d80-98e2-4d1b-a6be-4919665c817d\") " pod="calico-system/goldmane-666569f655-2b6w7" Jan 17 00:16:34.642955 kubelet[2515]: I0117 00:16:34.642025 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3a2c65a-63b7-42fa-9521-230bac7a856c-calico-apiserver-certs\") pod \"calico-apiserver-598d5588f5-f9bzn\" (UID: \"d3a2c65a-63b7-42fa-9521-230bac7a856c\") " pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" Jan 17 00:16:34.643214 kubelet[2515]: I0117 00:16:34.642076 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkppk\" (UniqueName: \"kubernetes.io/projected/2f67ac25-9d9d-4a2d-8ba8-729f2f585a51-kube-api-access-zkppk\") pod \"coredns-668d6bf9bc-rslgw\" (UID: \"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51\") " pod="kube-system/coredns-668d6bf9bc-rslgw" Jan 17 00:16:34.643214 kubelet[2515]: I0117 00:16:34.642103 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-ca-bundle\") pod \"whisker-6c748659fb-h9bq5\" (UID: \"e3b01780-da16-4a69-b846-f0add78dd84a\") " pod="calico-system/whisker-6c748659fb-h9bq5" Jan 17 00:16:34.643214 kubelet[2515]: I0117 00:16:34.642133 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznc6\" (UniqueName: \"kubernetes.io/projected/d3a2c65a-63b7-42fa-9521-230bac7a856c-kube-api-access-zznc6\") pod \"calico-apiserver-598d5588f5-f9bzn\" (UID: \"d3a2c65a-63b7-42fa-9521-230bac7a856c\") " pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" Jan 17 00:16:34.643214 kubelet[2515]: I0117 00:16:34.642170 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cc8f574-3f41-42c0-ad2b-73a6264664c2-config-volume\") pod \"coredns-668d6bf9bc-spn4s\" (UID: \"8cc8f574-3f41-42c0-ad2b-73a6264664c2\") " pod="kube-system/coredns-668d6bf9bc-spn4s" Jan 17 00:16:34.645395 kubelet[2515]: I0117 00:16:34.644641 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzdk\" (UniqueName: \"kubernetes.io/projected/8cc8f574-3f41-42c0-ad2b-73a6264664c2-kube-api-access-5zzdk\") pod \"coredns-668d6bf9bc-spn4s\" (UID: \"8cc8f574-3f41-42c0-ad2b-73a6264664c2\") " pod="kube-system/coredns-668d6bf9bc-spn4s" Jan 17 00:16:34.645395 kubelet[2515]: I0117 00:16:34.644703 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-backend-key-pair\") pod \"whisker-6c748659fb-h9bq5\" (UID: \"e3b01780-da16-4a69-b846-f0add78dd84a\") " pod="calico-system/whisker-6c748659fb-h9bq5" Jan 17 00:16:34.645395 kubelet[2515]: I0117 00:16:34.644740 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5z5\" (UniqueName: \"kubernetes.io/projected/fd23fc1c-2ea9-47e8-be5f-5279e384fd8c-kube-api-access-rt5z5\") pod \"calico-apiserver-598d5588f5-xr65t\" (UID: \"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c\") " 
pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" Jan 17 00:16:34.652074 systemd[1]: Created slice kubepods-burstable-pod2f67ac25_9d9d_4a2d_8ba8_729f2f585a51.slice - libcontainer container kubepods-burstable-pod2f67ac25_9d9d_4a2d_8ba8_729f2f585a51.slice. Jan 17 00:16:34.687772 kubelet[2515]: E0117 00:16:34.687148 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:34.691473 containerd[1465]: time="2026-01-17T00:16:34.691386155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:16:34.845861 containerd[1465]: time="2026-01-17T00:16:34.845806723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76574bc5-8kb79,Uid:96db8296-fac0-44e6-a2a4-5921dbbfa75c,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:34.948371 containerd[1465]: time="2026-01-17T00:16:34.948033231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2b6w7,Uid:abee5d80-98e2-4d1b-a6be-4919665c817d,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:35.174433 containerd[1465]: time="2026-01-17T00:16:35.173942978Z" level=error msg="Failed to destroy network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.174433 containerd[1465]: time="2026-01-17T00:16:35.174097387Z" level=error msg="Failed to destroy network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.180190 containerd[1465]: time="2026-01-17T00:16:35.179808966Z" level=error msg="encountered an error cleaning up failed sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.180560 containerd[1465]: time="2026-01-17T00:16:35.180348197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2b6w7,Uid:abee5d80-98e2-4d1b-a6be-4919665c817d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.182198 containerd[1465]: time="2026-01-17T00:16:35.179869292Z" level=error msg="encountered an error cleaning up failed sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.182198 containerd[1465]: time="2026-01-17T00:16:35.182003865Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-76574bc5-8kb79,Uid:96db8296-fac0-44e6-a2a4-5921dbbfa75c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.187683 kubelet[2515]: E0117 00:16:35.187247 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.187683 kubelet[2515]: E0117 00:16:35.187187 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.187683 kubelet[2515]: E0117 00:16:35.187498 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2b6w7" Jan 17 00:16:35.188026 kubelet[2515]: E0117 00:16:35.187705 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" Jan 17 00:16:35.188026 kubelet[2515]: E0117 00:16:35.187552 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2b6w7" Jan 17 00:16:35.188026 kubelet[2515]: E0117 00:16:35.187823 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2b6w7_calico-system(abee5d80-98e2-4d1b-a6be-4919665c817d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2b6w7_calico-system(abee5d80-98e2-4d1b-a6be-4919665c817d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:16:35.188694 kubelet[2515]: E0117 00:16:35.188480 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" Jan 17 00:16:35.188694 kubelet[2515]: E0117 00:16:35.188570 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76574bc5-8kb79_calico-system(96db8296-fac0-44e6-a2a4-5921dbbfa75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76574bc5-8kb79_calico-system(96db8296-fac0-44e6-a2a4-5921dbbfa75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:16:35.490162 systemd[1]: Created slice kubepods-besteffort-pod74b48e50_ea55_46c5_84cf_509f72a7af13.slice - libcontainer container kubepods-besteffort-pod74b48e50_ea55_46c5_84cf_509f72a7af13.slice. Jan 17 00:16:35.497862 containerd[1465]: time="2026-01-17T00:16:35.497403397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfkr9,Uid:74b48e50-ea55-46c5-84cf-509f72a7af13,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:35.650847 containerd[1465]: time="2026-01-17T00:16:35.650552443Z" level=error msg="Failed to destroy network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.651587 containerd[1465]: time="2026-01-17T00:16:35.651536508Z" level=error msg="encountered an error cleaning up failed sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.652311 containerd[1465]: time="2026-01-17T00:16:35.651823766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfkr9,Uid:74b48e50-ea55-46c5-84cf-509f72a7af13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.653456 kubelet[2515]: E0117 00:16:35.653069 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.653456 kubelet[2515]: E0117 00:16:35.653152 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:35.653456 kubelet[2515]: E0117 00:16:35.653188 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfkr9" Jan 17 00:16:35.653663 kubelet[2515]: E0117 00:16:35.653276 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:35.657151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311-shm.mount: Deactivated successfully. 
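Every RunPodSandbox failure in this stretch carries the same root cause string: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only exists once calico-node (whose flexvol-driver and install-cni init containers are what the earlier records show starting) has run and written it. A trivial sketch of that precondition check, just to make the logged error concrete (illustrative only, not Calico's actual code):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the condition the CNI plugin is reporting in the log:
		// calico-node has not yet written its nodename file on this host.
		fmt.Fprintf(os.Stderr,
			"stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n",
			nodenameFile, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", string(data))
}
```

Once calico-node is up and the file exists, kubelet's periodic sandbox retries for csi-node-driver, goldmane, whisker, the apiservers, and coredns should start succeeding on their own.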
Jan 17 00:16:35.709863 kubelet[2515]: I0117 00:16:35.709762 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:35.712631 kubelet[2515]: I0117 00:16:35.712514 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:35.720340 containerd[1465]: time="2026-01-17T00:16:35.719369492Z" level=info msg="StopPodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\"" Jan 17 00:16:35.721954 kubelet[2515]: I0117 00:16:35.721403 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:35.722241 containerd[1465]: time="2026-01-17T00:16:35.720347917Z" level=info msg="StopPodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\"" Jan 17 00:16:35.724286 containerd[1465]: time="2026-01-17T00:16:35.723712500Z" level=info msg="Ensure that sandbox 80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3 in task-service has been cleanup successfully" Jan 17 00:16:35.724286 containerd[1465]: time="2026-01-17T00:16:35.723820451Z" level=info msg="Ensure that sandbox fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311 in task-service has been cleanup successfully" Jan 17 00:16:35.734245 containerd[1465]: time="2026-01-17T00:16:35.733929916Z" level=info msg="StopPodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\"" Jan 17 00:16:35.735795 containerd[1465]: time="2026-01-17T00:16:35.735330640Z" level=info msg="Ensure that sandbox f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b in task-service has been cleanup successfully" Jan 17 00:16:35.753716 kubelet[2515]: E0117 00:16:35.753347 2515 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:35.756878 kubelet[2515]: E0117 00:16:35.753368 2515 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:35.756878 kubelet[2515]: E0117 00:16:35.754764 2515 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f67ac25-9d9d-4a2d-8ba8-729f2f585a51-config-volume podName:2f67ac25-9d9d-4a2d-8ba8-729f2f585a51 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:36.254726618 +0000 UTC m=+41.985683989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2f67ac25-9d9d-4a2d-8ba8-729f2f585a51-config-volume") pod "coredns-668d6bf9bc-rslgw" (UID: "2f67ac25-9d9d-4a2d-8ba8-729f2f585a51") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:35.756878 kubelet[2515]: E0117 00:16:35.756818 2515 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8cc8f574-3f41-42c0-ad2b-73a6264664c2-config-volume podName:8cc8f574-3f41-42c0-ad2b-73a6264664c2 nodeName:}" failed. No retries permitted until 2026-01-17 00:16:36.256776433 +0000 UTC m=+41.987733805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8cc8f574-3f41-42c0-ad2b-73a6264664c2-config-volume") pod "coredns-668d6bf9bc-spn4s" (UID: "8cc8f574-3f41-42c0-ad2b-73a6264664c2") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:16:35.793058 containerd[1465]: time="2026-01-17T00:16:35.792995506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-f9bzn,Uid:d3a2c65a-63b7-42fa-9521-230bac7a856c,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:16:35.812342 containerd[1465]: time="2026-01-17T00:16:35.811739525Z" level=error msg="StopPodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" failed" error="failed to destroy network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.812564 kubelet[2515]: E0117 00:16:35.812038 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:35.812564 kubelet[2515]: E0117 00:16:35.812127 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311"} Jan 17 00:16:35.812564 kubelet[2515]: E0117 00:16:35.812226 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74b48e50-ea55-46c5-84cf-509f72a7af13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:35.814024 kubelet[2515]: E0117 00:16:35.813822 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74b48e50-ea55-46c5-84cf-509f72a7af13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:35.826832 containerd[1465]: time="2026-01-17T00:16:35.826633531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c748659fb-h9bq5,Uid:e3b01780-da16-4a69-b846-f0add78dd84a,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:35.830520 containerd[1465]: time="2026-01-17T00:16:35.830403393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-xr65t,Uid:fd23fc1c-2ea9-47e8-be5f-5279e384fd8c,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:16:35.844555 containerd[1465]: 
time="2026-01-17T00:16:35.844480650Z" level=error msg="StopPodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" failed" error="failed to destroy network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.845261 kubelet[2515]: E0117 00:16:35.845188 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:35.845876 kubelet[2515]: E0117 00:16:35.845272 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b"} Jan 17 00:16:35.845876 kubelet[2515]: E0117 00:16:35.845325 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96db8296-fac0-44e6-a2a4-5921dbbfa75c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:35.845876 kubelet[2515]: E0117 00:16:35.845379 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96db8296-fac0-44e6-a2a4-5921dbbfa75c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:16:35.852230 containerd[1465]: time="2026-01-17T00:16:35.851558983Z" level=error msg="StopPodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" failed" error="failed to destroy network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.853212 kubelet[2515]: E0117 00:16:35.851988 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:35.853212 kubelet[2515]: E0117 00:16:35.852080 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3"} Jan 17 00:16:35.853212 kubelet[2515]: E0117 00:16:35.852133 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abee5d80-98e2-4d1b-a6be-4919665c817d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:35.853212 kubelet[2515]: E0117 00:16:35.852188 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abee5d80-98e2-4d1b-a6be-4919665c817d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:16:35.975130 containerd[1465]: time="2026-01-17T00:16:35.975022545Z" level=error msg="Failed to destroy network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.976088 containerd[1465]: time="2026-01-17T00:16:35.975766083Z" level=error msg="encountered an error cleaning up failed sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.976088 containerd[1465]: time="2026-01-17T00:16:35.975827948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-f9bzn,Uid:d3a2c65a-63b7-42fa-9521-230bac7a856c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.976346 kubelet[2515]: E0117 00:16:35.976069 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:35.976346 kubelet[2515]: E0117 00:16:35.976136 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" Jan 17 00:16:35.976346 kubelet[2515]: E0117 00:16:35.976161 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" Jan 17 00:16:35.976627 kubelet[2515]: E0117 00:16:35.976212 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598d5588f5-f9bzn_calico-apiserver(d3a2c65a-63b7-42fa-9521-230bac7a856c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598d5588f5-f9bzn_calico-apiserver(d3a2c65a-63b7-42fa-9521-230bac7a856c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:16:36.040515 containerd[1465]: time="2026-01-17T00:16:36.037935684Z" level=error msg="Failed to destroy network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.040515 containerd[1465]: time="2026-01-17T00:16:36.038964179Z" level=error msg="encountered an error cleaning up failed sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.040515 containerd[1465]: time="2026-01-17T00:16:36.039119700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-xr65t,Uid:fd23fc1c-2ea9-47e8-be5f-5279e384fd8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.040841 kubelet[2515]: E0117 00:16:36.039476 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.040841 kubelet[2515]: E0117 00:16:36.039543 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" Jan 17 00:16:36.040841 kubelet[2515]: E0117 00:16:36.039573 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" Jan 17 00:16:36.041023 kubelet[2515]: E0117 00:16:36.039627 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598d5588f5-xr65t_calico-apiserver(fd23fc1c-2ea9-47e8-be5f-5279e384fd8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598d5588f5-xr65t_calico-apiserver(fd23fc1c-2ea9-47e8-be5f-5279e384fd8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:16:36.057244 containerd[1465]: time="2026-01-17T00:16:36.057171295Z" level=error msg="Failed to destroy network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.057849 containerd[1465]: time="2026-01-17T00:16:36.057795374Z" level=error msg="encountered an error cleaning up failed sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.057968 containerd[1465]: time="2026-01-17T00:16:36.057892040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c748659fb-h9bq5,Uid:e3b01780-da16-4a69-b846-f0add78dd84a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.058274 kubelet[2515]: E0117 00:16:36.058237 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.058387 kubelet[2515]: E0117 00:16:36.058298 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c748659fb-h9bq5" Jan 17 00:16:36.058387 kubelet[2515]: E0117 00:16:36.058322 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c748659fb-h9bq5" Jan 17 00:16:36.058520 kubelet[2515]: E0117 00:16:36.058441 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c748659fb-h9bq5_calico-system(e3b01780-da16-4a69-b846-f0add78dd84a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c748659fb-h9bq5_calico-system(e3b01780-da16-4a69-b846-f0add78dd84a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c748659fb-h9bq5" podUID="e3b01780-da16-4a69-b846-f0add78dd84a" Jan 17 00:16:36.369949 kubelet[2515]: E0117 00:16:36.369763 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:36.372042 containerd[1465]: time="2026-01-17T00:16:36.371522624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spn4s,Uid:8cc8f574-3f41-42c0-ad2b-73a6264664c2,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:36.457009 kubelet[2515]: E0117 00:16:36.456957 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:36.461311 containerd[1465]: time="2026-01-17T00:16:36.460525152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rslgw,Uid:2f67ac25-9d9d-4a2d-8ba8-729f2f585a51,Namespace:kube-system,Attempt:0,}" Jan 17 00:16:36.579221 containerd[1465]: time="2026-01-17T00:16:36.579157478Z" level=error msg="Failed to destroy network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.581453 containerd[1465]: time="2026-01-17T00:16:36.580215645Z" level=error msg="encountered an error cleaning up failed sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.581453 containerd[1465]: time="2026-01-17T00:16:36.580304836Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spn4s,Uid:8cc8f574-3f41-42c0-ad2b-73a6264664c2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.583663 kubelet[2515]: E0117 00:16:36.581891 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.583663 kubelet[2515]: E0117 00:16:36.581989 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-spn4s" Jan 17 00:16:36.583663 kubelet[2515]: E0117 00:16:36.582023 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-spn4s" Jan 17 00:16:36.583952 kubelet[2515]: E0117 00:16:36.582082 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-spn4s_kube-system(8cc8f574-3f41-42c0-ad2b-73a6264664c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-spn4s_kube-system(8cc8f574-3f41-42c0-ad2b-73a6264664c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-spn4s" podUID="8cc8f574-3f41-42c0-ad2b-73a6264664c2" Jan 17 00:16:36.587789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6-shm.mount: Deactivated successfully. 
Jan 17 00:16:36.660165 containerd[1465]: time="2026-01-17T00:16:36.660094983Z" level=error msg="Failed to destroy network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.662822 containerd[1465]: time="2026-01-17T00:16:36.662736217Z" level=error msg="encountered an error cleaning up failed sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.663025 containerd[1465]: time="2026-01-17T00:16:36.662844640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rslgw,Uid:2f67ac25-9d9d-4a2d-8ba8-729f2f585a51,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.664849 kubelet[2515]: E0117 00:16:36.663372 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.664849 kubelet[2515]: E0117 00:16:36.663483 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rslgw" Jan 17 00:16:36.664849 kubelet[2515]: E0117 00:16:36.663521 2515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rslgw" Jan 17 00:16:36.665144 kubelet[2515]: E0117 00:16:36.663587 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rslgw_kube-system(2f67ac25-9d9d-4a2d-8ba8-729f2f585a51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rslgw_kube-system(2f67ac25-9d9d-4a2d-8ba8-729f2f585a51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rslgw" 
podUID="2f67ac25-9d9d-4a2d-8ba8-729f2f585a51" Jan 17 00:16:36.666397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d-shm.mount: Deactivated successfully. Jan 17 00:16:36.725864 kubelet[2515]: I0117 00:16:36.725801 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:36.728016 containerd[1465]: time="2026-01-17T00:16:36.727225778Z" level=info msg="StopPodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\"" Jan 17 00:16:36.729395 kubelet[2515]: I0117 00:16:36.728714 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:36.729601 containerd[1465]: time="2026-01-17T00:16:36.728961772Z" level=info msg="Ensure that sandbox 42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981 in task-service has been cleanup successfully" Jan 17 00:16:36.735545 containerd[1465]: time="2026-01-17T00:16:36.735267548Z" level=info msg="StopPodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\"" Jan 17 00:16:36.736752 containerd[1465]: time="2026-01-17T00:16:36.736541114Z" level=info msg="Ensure that sandbox 44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6 in task-service has been cleanup successfully" Jan 17 00:16:36.746162 kubelet[2515]: I0117 00:16:36.743524 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:36.748448 containerd[1465]: time="2026-01-17T00:16:36.748378703Z" level=info msg="StopPodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\"" Jan 17 00:16:36.748626 containerd[1465]: time="2026-01-17T00:16:36.748599088Z" level=info msg="Ensure that sandbox 0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417 in task-service has been cleanup successfully" Jan 17 00:16:36.751625 kubelet[2515]: I0117 00:16:36.751584 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:36.759005 containerd[1465]: time="2026-01-17T00:16:36.758855551Z" level=info msg="StopPodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\"" Jan 17 00:16:36.759687 containerd[1465]: time="2026-01-17T00:16:36.759628075Z" level=info msg="Ensure that sandbox 31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d in task-service has been cleanup successfully" Jan 17 00:16:36.776449 kubelet[2515]: I0117 00:16:36.776367 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:36.782818 containerd[1465]: time="2026-01-17T00:16:36.782261489Z" level=info msg="StopPodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\"" Jan 17 00:16:36.782818 containerd[1465]: time="2026-01-17T00:16:36.782534212Z" level=info msg="Ensure that sandbox 80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3 in task-service has been cleanup successfully" Jan 17 00:16:36.902757 containerd[1465]: time="2026-01-17T00:16:36.902691386Z" level=error msg="StopPodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" 
failed" error="failed to destroy network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.903476 kubelet[2515]: E0117 00:16:36.903216 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:36.903476 kubelet[2515]: E0117 00:16:36.903282 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981"} Jan 17 00:16:36.903476 kubelet[2515]: E0117 00:16:36.903347 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3a2c65a-63b7-42fa-9521-230bac7a856c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:36.903476 kubelet[2515]: E0117 00:16:36.903393 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3a2c65a-63b7-42fa-9521-230bac7a856c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:16:36.925292 containerd[1465]: time="2026-01-17T00:16:36.925124481Z" level=error msg="StopPodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" failed" error="failed to destroy network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.927349 containerd[1465]: time="2026-01-17T00:16:36.927221143Z" level=error msg="StopPodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" failed" error="failed to destroy network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.927473 kubelet[2515]: E0117 00:16:36.926927 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:36.927473 kubelet[2515]: E0117 00:16:36.927035 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6"} Jan 17 00:16:36.927473 kubelet[2515]: E0117 00:16:36.927103 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cc8f574-3f41-42c0-ad2b-73a6264664c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:36.927473 kubelet[2515]: E0117 00:16:36.927136 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cc8f574-3f41-42c0-ad2b-73a6264664c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-spn4s" podUID="8cc8f574-3f41-42c0-ad2b-73a6264664c2" Jan 17 00:16:36.928995 containerd[1465]: time="2026-01-17T00:16:36.928616588Z" level=error msg="StopPodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" failed" error="failed to destroy network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.930447 kubelet[2515]: E0117 00:16:36.930306 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:36.930447 kubelet[2515]: E0117 00:16:36.930374 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3"} Jan 17 00:16:36.930849 kubelet[2515]: E0117 00:16:36.930665 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:36.930849 kubelet[2515]: E0117 00:16:36.930708 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d"} Jan 17 00:16:36.930849 kubelet[2515]: E0117 00:16:36.930747 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:36.930849 kubelet[2515]: E0117 00:16:36.930797 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rslgw" podUID="2f67ac25-9d9d-4a2d-8ba8-729f2f585a51" Jan 17 00:16:36.931884 kubelet[2515]: E0117 00:16:36.931115 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:36.931884 kubelet[2515]: E0117 00:16:36.931160 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:16:36.931884 kubelet[2515]: E0117 00:16:36.931615 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:36.931884 kubelet[2515]: E0117 00:16:36.931656 2515 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417"} Jan 17 00:16:36.932329 containerd[1465]: time="2026-01-17T00:16:36.931365335Z" level=error msg="StopPodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" failed" error="failed to destroy network for sandbox 
\"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:16:36.932402 kubelet[2515]: E0117 00:16:36.931724 2515 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3b01780-da16-4a69-b846-f0add78dd84a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:16:36.932402 kubelet[2515]: E0117 00:16:36.931764 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3b01780-da16-4a69-b846-f0add78dd84a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c748659fb-h9bq5" podUID="e3b01780-da16-4a69-b846-f0add78dd84a" Jan 17 00:16:42.995040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584368737.mount: Deactivated successfully. Jan 17 00:16:43.277788 containerd[1465]: time="2026-01-17T00:16:43.175637762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:16:43.283736 containerd[1465]: time="2026-01-17T00:16:43.266261472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:43.323449 containerd[1465]: time="2026-01-17T00:16:43.323301577Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:43.326532 containerd[1465]: time="2026-01-17T00:16:43.325797103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:16:43.331806 containerd[1465]: time="2026-01-17T00:16:43.331724956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.633392475s" Jan 17 00:16:43.332065 containerd[1465]: time="2026-01-17T00:16:43.332036516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:16:43.427065 containerd[1465]: time="2026-01-17T00:16:43.426998909Z" level=info msg="CreateContainer within sandbox \"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:16:43.590004 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1583220998.mount: Deactivated successfully. Jan 17 00:16:43.611062 containerd[1465]: time="2026-01-17T00:16:43.610982938Z" level=info msg="CreateContainer within sandbox \"6808e0cad28ed51cfa234c6cace5443320491fb28d1b189d58b60d3bc27a8c56\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333\"" Jan 17 00:16:43.615724 containerd[1465]: time="2026-01-17T00:16:43.615653470Z" level=info msg="StartContainer for \"ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333\"" Jan 17 00:16:43.755847 systemd[1]: Started cri-containerd-ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333.scope - libcontainer container ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333. Jan 17 00:16:43.835471 containerd[1465]: time="2026-01-17T00:16:43.835389042Z" level=info msg="StartContainer for \"ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333\" returns successfully" Jan 17 00:16:43.849855 kubelet[2515]: E0117 00:16:43.849298 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:44.068092 kubelet[2515]: I0117 00:16:44.064523 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c7wxp" podStartSLOduration=1.970167632 podStartE2EDuration="21.038606317s" podCreationTimestamp="2026-01-17 00:16:23 +0000 UTC" firstStartedPulling="2026-01-17 00:16:24.264791236 +0000 UTC m=+29.995748589" lastFinishedPulling="2026-01-17 00:16:43.333229914 +0000 UTC m=+49.064187274" observedRunningTime="2026-01-17 00:16:43.991078252 +0000 UTC m=+49.722035622" watchObservedRunningTime="2026-01-17 00:16:44.038606317 +0000 UTC m=+49.769563679" Jan 17 00:16:44.193716 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:16:44.195718 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 00:16:44.464167 containerd[1465]: time="2026-01-17T00:16:44.463573241Z" level=info msg="StopPodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\"" Jan 17 00:16:44.857515 kubelet[2515]: E0117 00:16:44.855494 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:44.931512 systemd[1]: run-containerd-runc-k8s.io-ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333-runc.3QjYqW.mount: Deactivated successfully. Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:44.642 [INFO][3755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:44.644 [INFO][3755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" iface="eth0" netns="/var/run/netns/cni-f9ffae46-05c9-69ff-74dd-a3f06c1fc8a6" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:44.644 [INFO][3755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" iface="eth0" netns="/var/run/netns/cni-f9ffae46-05c9-69ff-74dd-a3f06c1fc8a6" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:44.645 [INFO][3755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" iface="eth0" netns="/var/run/netns/cni-f9ffae46-05c9-69ff-74dd-a3f06c1fc8a6" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:44.645 [INFO][3755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:44.645 [INFO][3755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.044 [INFO][3764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.048 [INFO][3764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.049 [INFO][3764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.069 [WARNING][3764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.070 [INFO][3764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.077 [INFO][3764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:45.084707 containerd[1465]: 2026-01-17 00:16:45.080 [INFO][3755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:45.087052 containerd[1465]: time="2026-01-17T00:16:45.085503413Z" level=info msg="TearDown network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" successfully" Jan 17 00:16:45.087052 containerd[1465]: time="2026-01-17T00:16:45.085541491Z" level=info msg="StopPodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" returns successfully" Jan 17 00:16:45.091558 systemd[1]: run-netns-cni\x2df9ffae46\x2d05c9\x2d69ff\x2d74dd\x2da3f06c1fc8a6.mount: Deactivated successfully. 
Jan 17 00:16:45.163946 kubelet[2515]: I0117 00:16:45.163860 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-ca-bundle\") pod \"e3b01780-da16-4a69-b846-f0add78dd84a\" (UID: \"e3b01780-da16-4a69-b846-f0add78dd84a\") " Jan 17 00:16:45.163946 kubelet[2515]: I0117 00:16:45.163947 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-backend-key-pair\") pod \"e3b01780-da16-4a69-b846-f0add78dd84a\" (UID: \"e3b01780-da16-4a69-b846-f0add78dd84a\") " Jan 17 00:16:45.164241 kubelet[2515]: I0117 00:16:45.164008 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959k9\" (UniqueName: \"kubernetes.io/projected/e3b01780-da16-4a69-b846-f0add78dd84a-kube-api-access-959k9\") pod \"e3b01780-da16-4a69-b846-f0add78dd84a\" (UID: \"e3b01780-da16-4a69-b846-f0add78dd84a\") " Jan 17 00:16:45.167734 kubelet[2515]: I0117 00:16:45.165925 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e3b01780-da16-4a69-b846-f0add78dd84a" (UID: "e3b01780-da16-4a69-b846-f0add78dd84a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:16:45.175232 systemd[1]: var-lib-kubelet-pods-e3b01780\x2dda16\x2d4a69\x2db846\x2df0add78dd84a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d959k9.mount: Deactivated successfully. Jan 17 00:16:45.177669 kubelet[2515]: I0117 00:16:45.177396 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b01780-da16-4a69-b846-f0add78dd84a-kube-api-access-959k9" (OuterVolumeSpecName: "kube-api-access-959k9") pod "e3b01780-da16-4a69-b846-f0add78dd84a" (UID: "e3b01780-da16-4a69-b846-f0add78dd84a"). InnerVolumeSpecName "kube-api-access-959k9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:16:45.182748 kubelet[2515]: I0117 00:16:45.182673 2515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e3b01780-da16-4a69-b846-f0add78dd84a" (UID: "e3b01780-da16-4a69-b846-f0add78dd84a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:16:45.183460 systemd[1]: var-lib-kubelet-pods-e3b01780\x2dda16\x2d4a69\x2db846\x2df0add78dd84a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 17 00:16:45.265496 kubelet[2515]: I0117 00:16:45.265345 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-959k9\" (UniqueName: \"kubernetes.io/projected/e3b01780-da16-4a69-b846-f0add78dd84a-kube-api-access-959k9\") on node \"ci-4081.3.6-n-912fd252f4\" DevicePath \"\"" Jan 17 00:16:45.265496 kubelet[2515]: I0117 00:16:45.265398 2515 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-ca-bundle\") on node \"ci-4081.3.6-n-912fd252f4\" DevicePath \"\"" Jan 17 00:16:45.265496 kubelet[2515]: I0117 00:16:45.265460 2515 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3b01780-da16-4a69-b846-f0add78dd84a-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-912fd252f4\" DevicePath \"\"" Jan 17 00:16:45.865471 kubelet[2515]: E0117 00:16:45.864147 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:45.874459 systemd[1]: Removed slice kubepods-besteffort-pode3b01780_da16_4a69_b846_f0add78dd84a.slice - libcontainer container kubepods-besteffort-pode3b01780_da16_4a69_b846_f0add78dd84a.slice. Jan 17 00:16:46.055124 systemd[1]: Created slice kubepods-besteffort-pod1d8c80dc_ca7e_4704_80bc_010f68ebac60.slice - libcontainer container kubepods-besteffort-pod1d8c80dc_ca7e_4704_80bc_010f68ebac60.slice. Jan 17 00:16:46.078724 kubelet[2515]: I0117 00:16:46.078646 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkq5\" (UniqueName: \"kubernetes.io/projected/1d8c80dc-ca7e-4704-80bc-010f68ebac60-kube-api-access-vfkq5\") pod \"whisker-78cdb99998-slhfc\" (UID: \"1d8c80dc-ca7e-4704-80bc-010f68ebac60\") " pod="calico-system/whisker-78cdb99998-slhfc" Jan 17 00:16:46.078916 kubelet[2515]: I0117 00:16:46.078736 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8c80dc-ca7e-4704-80bc-010f68ebac60-whisker-ca-bundle\") pod \"whisker-78cdb99998-slhfc\" (UID: \"1d8c80dc-ca7e-4704-80bc-010f68ebac60\") " pod="calico-system/whisker-78cdb99998-slhfc" Jan 17 00:16:46.078916 kubelet[2515]: I0117 00:16:46.078798 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8c80dc-ca7e-4704-80bc-010f68ebac60-whisker-backend-key-pair\") pod \"whisker-78cdb99998-slhfc\" (UID: \"1d8c80dc-ca7e-4704-80bc-010f68ebac60\") " pod="calico-system/whisker-78cdb99998-slhfc" Jan 17 00:16:46.362984 containerd[1465]: time="2026-01-17T00:16:46.362923193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cdb99998-slhfc,Uid:1d8c80dc-ca7e-4704-80bc-010f68ebac60,Namespace:calico-system,Attempt:0,}" Jan 17 00:16:46.487787 kubelet[2515]: I0117 00:16:46.486378 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b01780-da16-4a69-b846-f0add78dd84a" path="/var/lib/kubelet/pods/e3b01780-da16-4a69-b846-f0add78dd84a/volumes" Jan 17 00:16:46.711232 systemd-networkd[1375]: cali2963528f4a6: Link UP Jan 17 00:16:46.712072 systemd-networkd[1375]: cali2963528f4a6: Gained carrier Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.478 [INFO][3914] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.507 [INFO][3914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0 whisker-78cdb99998- calico-system 1d8c80dc-ca7e-4704-80bc-010f68ebac60 986 0 2026-01-17 00:16:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78cdb99998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 whisker-78cdb99998-slhfc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2963528f4a6 [] [] }} ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.507 [INFO][3914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.594 [INFO][3925] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" HandleID="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.594 [INFO][3925] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" HandleID="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307df0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-912fd252f4", "pod":"whisker-78cdb99998-slhfc", "timestamp":"2026-01-17 00:16:46.594266933 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.595 [INFO][3925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.595 [INFO][3925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.595 [INFO][3925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.609 [INFO][3925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.629 [INFO][3925] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.642 [INFO][3925] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.646 [INFO][3925] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.651 [INFO][3925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.651 [INFO][3925] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.653 [INFO][3925] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09 Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.662 [INFO][3925] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.673 [INFO][3925] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.1/26] block=192.168.4.0/26 handle="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.674 [INFO][3925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.1/26] handle="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.674 [INFO][3925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:46.783094 containerd[1465]: 2026-01-17 00:16:46.674 [INFO][3925] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.1/26] IPv6=[] ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" HandleID="k8s-pod-network.defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.784330 containerd[1465]: 2026-01-17 00:16:46.684 [INFO][3914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0", GenerateName:"whisker-78cdb99998-", Namespace:"calico-system", SelfLink:"", UID:"1d8c80dc-ca7e-4704-80bc-010f68ebac60", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cdb99998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"whisker-78cdb99998-slhfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2963528f4a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:46.784330 containerd[1465]: 2026-01-17 00:16:46.684 [INFO][3914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.1/32] ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.784330 containerd[1465]: 2026-01-17 00:16:46.684 [INFO][3914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2963528f4a6 ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.784330 containerd[1465]: 2026-01-17 00:16:46.712 [INFO][3914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.784330 containerd[1465]: 2026-01-17 00:16:46.717 [INFO][3914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" 
Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0", GenerateName:"whisker-78cdb99998-", Namespace:"calico-system", SelfLink:"", UID:"1d8c80dc-ca7e-4704-80bc-010f68ebac60", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cdb99998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09", Pod:"whisker-78cdb99998-slhfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2963528f4a6", MAC:"d2:bb:f6:13:39:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:46.784330 containerd[1465]: 2026-01-17 00:16:46.777 [INFO][3914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09" Namespace="calico-system" Pod="whisker-78cdb99998-slhfc" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--78cdb99998--slhfc-eth0" Jan 17 00:16:46.848550 containerd[1465]: time="2026-01-17T00:16:46.847136055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:46.849962 containerd[1465]: time="2026-01-17T00:16:46.848282419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:46.849962 containerd[1465]: time="2026-01-17T00:16:46.848318807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:46.849962 containerd[1465]: time="2026-01-17T00:16:46.848483397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:46.900743 systemd[1]: Started cri-containerd-defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09.scope - libcontainer container defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09. 
Jan 17 00:16:47.265299 containerd[1465]: time="2026-01-17T00:16:47.265238803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cdb99998-slhfc,Uid:1d8c80dc-ca7e-4704-80bc-010f68ebac60,Namespace:calico-system,Attempt:0,} returns sandbox id \"defd1db7c4990b29b12f561faeba0fdbc4ccd718996cd8719161a43f53866f09\"" Jan 17 00:16:47.279241 containerd[1465]: time="2026-01-17T00:16:47.279166785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:16:47.481440 containerd[1465]: time="2026-01-17T00:16:47.481363341Z" level=info msg="StopPodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\"" Jan 17 00:16:47.483380 containerd[1465]: time="2026-01-17T00:16:47.483283918Z" level=info msg="StopPodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\"" Jan 17 00:16:47.672382 containerd[1465]: time="2026-01-17T00:16:47.672332581Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:47.712251 containerd[1465]: time="2026-01-17T00:16:47.675066762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:16:47.712509 containerd[1465]: time="2026-01-17T00:16:47.675124422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:16:47.713168 kubelet[2515]: E0117 00:16:47.712953 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:47.714292 kubelet[2515]: E0117 00:16:47.713830 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:16:47.736462 kubelet[2515]: E0117 00:16:47.735647 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d9e40a58538b470a9311e65e534c32dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfkq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cdb99998-slhfc_calico-system(1d8c80dc-ca7e-4704-80bc-010f68ebac60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:47.747617 containerd[1465]: time="2026-01-17T00:16:47.747554689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.641 [INFO][4026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.641 [INFO][4026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" iface="eth0" netns="/var/run/netns/cni-b6f9b29e-d1cf-d827-94b3-e8f7697d1337" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.643 [INFO][4026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" iface="eth0" netns="/var/run/netns/cni-b6f9b29e-d1cf-d827-94b3-e8f7697d1337" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.644 [INFO][4026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" iface="eth0" netns="/var/run/netns/cni-b6f9b29e-d1cf-d827-94b3-e8f7697d1337" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.644 [INFO][4026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.644 [INFO][4026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.702 [INFO][4045] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.702 [INFO][4045] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.702 [INFO][4045] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.723 [WARNING][4045] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.723 [INFO][4045] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.726 [INFO][4045] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:47.756519 containerd[1465]: 2026-01-17 00:16:47.735 [INFO][4026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:47.759774 containerd[1465]: time="2026-01-17T00:16:47.759603268Z" level=info msg="TearDown network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" successfully" Jan 17 00:16:47.759774 containerd[1465]: time="2026-01-17T00:16:47.759639260Z" level=info msg="StopPodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" returns successfully" Jan 17 00:16:47.761122 containerd[1465]: time="2026-01-17T00:16:47.760880065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76574bc5-8kb79,Uid:96db8296-fac0-44e6-a2a4-5921dbbfa75c,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:47.766642 systemd[1]: run-netns-cni\x2db6f9b29e\x2dd1cf\x2dd827\x2d94b3\x2de8f7697d1337.mount: Deactivated successfully. 
Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.620 [INFO][4025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.622 [INFO][4025] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" iface="eth0" netns="/var/run/netns/cni-7abafae2-7817-7686-9953-1b751e735252" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.622 [INFO][4025] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" iface="eth0" netns="/var/run/netns/cni-7abafae2-7817-7686-9953-1b751e735252" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.623 [INFO][4025] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" iface="eth0" netns="/var/run/netns/cni-7abafae2-7817-7686-9953-1b751e735252" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.623 [INFO][4025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.623 [INFO][4025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.707 [INFO][4040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.707 [INFO][4040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.726 [INFO][4040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.747 [WARNING][4040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.747 [INFO][4040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.752 [INFO][4040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:47.773237 containerd[1465]: 2026-01-17 00:16:47.770 [INFO][4025] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:47.774714 containerd[1465]: time="2026-01-17T00:16:47.774541528Z" level=info msg="TearDown network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" successfully" Jan 17 00:16:47.774714 containerd[1465]: time="2026-01-17T00:16:47.774575226Z" level=info msg="StopPodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" returns successfully" Jan 17 00:16:47.777070 kubelet[2515]: E0117 00:16:47.777028 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:47.783497 containerd[1465]: time="2026-01-17T00:16:47.783396482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spn4s,Uid:8cc8f574-3f41-42c0-ad2b-73a6264664c2,Namespace:kube-system,Attempt:1,}" Jan 17 00:16:47.787576 systemd[1]: run-netns-cni\x2d7abafae2\x2d7817\x2d7686\x2d9953\x2d1b751e735252.mount: Deactivated successfully. Jan 17 00:16:47.895516 kernel: bpftool[4085]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:16:48.051549 systemd-networkd[1375]: cali2963528f4a6: Gained IPv6LL Jan 17 00:16:48.088909 containerd[1465]: time="2026-01-17T00:16:48.088579090Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:48.090975 containerd[1465]: time="2026-01-17T00:16:48.090856527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:16:48.093920 containerd[1465]: time="2026-01-17T00:16:48.091082303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:48.094082 kubelet[2515]: E0117 00:16:48.093986 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:48.094082 kubelet[2515]: E0117 00:16:48.094057 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:16:48.094348 kubelet[2515]: E0117 00:16:48.094256 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfkq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cdb99998-slhfc_calico-system(1d8c80dc-ca7e-4704-80bc-010f68ebac60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:48.095881 kubelet[2515]: E0117 00:16:48.095771 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60" Jan 17 00:16:48.127754 systemd-networkd[1375]: cali4de818d07d5: Link UP Jan 17 00:16:48.130605 systemd-networkd[1375]: cali4de818d07d5: Gained carrier Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:47.916 [INFO][4069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0 coredns-668d6bf9bc- kube-system 8cc8f574-3f41-42c0-ad2b-73a6264664c2 998 0 2026-01-17 00:16:00 
+0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 coredns-668d6bf9bc-spn4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4de818d07d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:47.916 [INFO][4069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.023 [INFO][4090] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" HandleID="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.026 [INFO][4090] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" HandleID="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-912fd252f4", "pod":"coredns-668d6bf9bc-spn4s", "timestamp":"2026-01-17 00:16:48.02378844 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.026 [INFO][4090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.026 [INFO][4090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.029 [INFO][4090] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.058 [INFO][4090] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.070 [INFO][4090] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.080 [INFO][4090] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.084 [INFO][4090] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.088 [INFO][4090] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.088 [INFO][4090] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.095 [INFO][4090] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.102 [INFO][4090] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.111 [INFO][4090] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.2/26] block=192.168.4.0/26 handle="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.111 [INFO][4090] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.2/26] handle="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.112 [INFO][4090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:48.172478 containerd[1465]: 2026-01-17 00:16:48.112 [INFO][4090] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.2/26] IPv6=[] ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" HandleID="k8s-pod-network.6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.173151 containerd[1465]: 2026-01-17 00:16:48.116 [INFO][4069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cc8f574-3f41-42c0-ad2b-73a6264664c2", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"coredns-668d6bf9bc-spn4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4de818d07d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.173151 containerd[1465]: 2026-01-17 00:16:48.116 [INFO][4069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.2/32] ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.173151 containerd[1465]: 2026-01-17 00:16:48.117 [INFO][4069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4de818d07d5 ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.173151 containerd[1465]: 2026-01-17 00:16:48.130 [INFO][4069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" 
WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.173151 containerd[1465]: 2026-01-17 00:16:48.131 [INFO][4069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cc8f574-3f41-42c0-ad2b-73a6264664c2", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed", Pod:"coredns-668d6bf9bc-spn4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4de818d07d5", MAC:"be:e0:af:4d:c3:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.173151 containerd[1465]: 2026-01-17 00:16:48.167 [INFO][4069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-spn4s" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:48.243418 containerd[1465]: time="2026-01-17T00:16:48.242996953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:48.243418 containerd[1465]: time="2026-01-17T00:16:48.243130698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:48.243418 containerd[1465]: time="2026-01-17T00:16:48.243186780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:48.245586 containerd[1465]: time="2026-01-17T00:16:48.244307283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:48.270221 systemd-networkd[1375]: cali8dd1eee73de: Link UP Jan 17 00:16:48.271708 systemd-networkd[1375]: cali8dd1eee73de: Gained carrier Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:47.938 [INFO][4058] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0 calico-kube-controllers-76574bc5- calico-system 96db8296-fac0-44e6-a2a4-5921dbbfa75c 999 0 2026-01-17 00:16:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76574bc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 calico-kube-controllers-76574bc5-8kb79 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8dd1eee73de [] [] }} ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:47.939 [INFO][4058] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.061 [INFO][4095] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" HandleID="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.062 [INFO][4095] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" HandleID="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-912fd252f4", "pod":"calico-kube-controllers-76574bc5-8kb79", "timestamp":"2026-01-17 00:16:48.061511848 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.062 [INFO][4095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.112 [INFO][4095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.112 [INFO][4095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.151 [INFO][4095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.171 [INFO][4095] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.201 [INFO][4095] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.207 [INFO][4095] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.214 [INFO][4095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.214 [INFO][4095] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.220 [INFO][4095] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734 Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.234 [INFO][4095] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.247 [INFO][4095] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.3/26] block=192.168.4.0/26 handle="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.247 [INFO][4095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.3/26] handle="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.248 [INFO][4095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:48.314621 containerd[1465]: 2026-01-17 00:16:48.248 [INFO][4095] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.3/26] IPv6=[] ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" HandleID="k8s-pod-network.2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.318808 containerd[1465]: 2026-01-17 00:16:48.260 [INFO][4058] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0", GenerateName:"calico-kube-controllers-76574bc5-", Namespace:"calico-system", SelfLink:"", UID:"96db8296-fac0-44e6-a2a4-5921dbbfa75c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76574bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"calico-kube-controllers-76574bc5-8kb79", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dd1eee73de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.318808 containerd[1465]: 2026-01-17 00:16:48.260 [INFO][4058] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.3/32] ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.318808 containerd[1465]: 2026-01-17 00:16:48.260 [INFO][4058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8dd1eee73de ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.318808 containerd[1465]: 2026-01-17 00:16:48.273 [INFO][4058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.318808 
containerd[1465]: 2026-01-17 00:16:48.277 [INFO][4058] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0", GenerateName:"calico-kube-controllers-76574bc5-", Namespace:"calico-system", SelfLink:"", UID:"96db8296-fac0-44e6-a2a4-5921dbbfa75c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76574bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734", Pod:"calico-kube-controllers-76574bc5-8kb79", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dd1eee73de", MAC:"9a:d1:0a:d5:d1:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:48.318808 containerd[1465]: 2026-01-17 00:16:48.299 [INFO][4058] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734" Namespace="calico-system" Pod="calico-kube-controllers-76574bc5-8kb79" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:48.315723 systemd[1]: Started cri-containerd-6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed.scope - libcontainer container 6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed. Jan 17 00:16:48.384964 containerd[1465]: time="2026-01-17T00:16:48.383618153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:48.384964 containerd[1465]: time="2026-01-17T00:16:48.383815359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:48.384964 containerd[1465]: time="2026-01-17T00:16:48.383892661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:48.384964 containerd[1465]: time="2026-01-17T00:16:48.384380963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:48.440713 systemd[1]: Started cri-containerd-2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734.scope - libcontainer container 2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734. Jan 17 00:16:48.445157 containerd[1465]: time="2026-01-17T00:16:48.444778484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spn4s,Uid:8cc8f574-3f41-42c0-ad2b-73a6264664c2,Namespace:kube-system,Attempt:1,} returns sandbox id \"6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed\"" Jan 17 00:16:48.447814 kubelet[2515]: E0117 00:16:48.447774 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:48.451477 containerd[1465]: time="2026-01-17T00:16:48.451247696Z" level=info msg="CreateContainer within sandbox \"6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:16:48.481712 containerd[1465]: time="2026-01-17T00:16:48.481665312Z" level=info msg="StopPodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\"" Jan 17 00:16:48.487099 containerd[1465]: time="2026-01-17T00:16:48.487020999Z" level=info msg="StopPodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\"" Jan 17 00:16:48.504169 containerd[1465]: time="2026-01-17T00:16:48.503403790Z" level=info msg="CreateContainer within sandbox \"6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ea2e03c7add8b9970c595a46ebf18a34eaf287ee4dc6234c0d32b77d0746864\"" Jan 17 00:16:48.509912 containerd[1465]: time="2026-01-17T00:16:48.507429643Z" level=info msg="StartContainer for \"5ea2e03c7add8b9970c595a46ebf18a34eaf287ee4dc6234c0d32b77d0746864\"" Jan 17 00:16:48.604745 systemd[1]: Started cri-containerd-5ea2e03c7add8b9970c595a46ebf18a34eaf287ee4dc6234c0d32b77d0746864.scope - libcontainer container 5ea2e03c7add8b9970c595a46ebf18a34eaf287ee4dc6234c0d32b77d0746864. Jan 17 00:16:48.678587 containerd[1465]: time="2026-01-17T00:16:48.677957411Z" level=info msg="StartContainer for \"5ea2e03c7add8b9970c595a46ebf18a34eaf287ee4dc6234c0d32b77d0746864\" returns successfully" Jan 17 00:16:48.813392 containerd[1465]: time="2026-01-17T00:16:48.813324368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76574bc5-8kb79,Uid:96db8296-fac0-44e6-a2a4-5921dbbfa75c,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734\"" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.698 [INFO][4215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.698 [INFO][4215] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" iface="eth0" netns="/var/run/netns/cni-08af041d-5126-64cf-69fe-86736d1fa529" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.698 [INFO][4215] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" iface="eth0" netns="/var/run/netns/cni-08af041d-5126-64cf-69fe-86736d1fa529" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.701 [INFO][4215] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" iface="eth0" netns="/var/run/netns/cni-08af041d-5126-64cf-69fe-86736d1fa529" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.701 [INFO][4215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.701 [INFO][4215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.754 [INFO][4264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.755 [INFO][4264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.755 [INFO][4264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.780 [WARNING][4264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.780 [INFO][4264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.786 [INFO][4264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.814237 containerd[1465]: 2026-01-17 00:16:48.801 [INFO][4215] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:48.815913 containerd[1465]: time="2026-01-17T00:16:48.815350488Z" level=info msg="TearDown network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" successfully" Jan 17 00:16:48.815913 containerd[1465]: time="2026-01-17T00:16:48.815372987Z" level=info msg="StopPodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" returns successfully" Jan 17 00:16:48.818943 containerd[1465]: time="2026-01-17T00:16:48.818907031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-xr65t,Uid:fd23fc1c-2ea9-47e8-be5f-5279e384fd8c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:16:48.821155 containerd[1465]: time="2026-01-17T00:16:48.821119354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.686 [INFO][4207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.686 [INFO][4207] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" iface="eth0" netns="/var/run/netns/cni-7c0321a5-6f08-a012-cd40-495a41f2128b" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.686 [INFO][4207] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" iface="eth0" netns="/var/run/netns/cni-7c0321a5-6f08-a012-cd40-495a41f2128b" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.687 [INFO][4207] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" iface="eth0" netns="/var/run/netns/cni-7c0321a5-6f08-a012-cd40-495a41f2128b" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.687 [INFO][4207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.688 [INFO][4207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.802 [INFO][4259] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.805 [INFO][4259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.805 [INFO][4259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.827 [WARNING][4259] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.827 [INFO][4259] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.831 [INFO][4259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:48.843592 containerd[1465]: 2026-01-17 00:16:48.839 [INFO][4207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:48.845044 containerd[1465]: time="2026-01-17T00:16:48.843736689Z" level=info msg="TearDown network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" successfully" Jan 17 00:16:48.845044 containerd[1465]: time="2026-01-17T00:16:48.843761617Z" level=info msg="StopPodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" returns successfully" Jan 17 00:16:48.845906 containerd[1465]: time="2026-01-17T00:16:48.845869699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfkr9,Uid:74b48e50-ea55-46c5-84cf-509f72a7af13,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:48.933397 kubelet[2515]: E0117 00:16:48.933217 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:48.942199 kubelet[2515]: E0117 00:16:48.941892 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60" Jan 17 00:16:49.014239 kubelet[2515]: I0117 00:16:49.013917 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-spn4s" podStartSLOduration=49.01311638 podStartE2EDuration="49.01311638s" podCreationTimestamp="2026-01-17 00:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:49.009069657 +0000 UTC m=+54.740027020" watchObservedRunningTime="2026-01-17 00:16:49.01311638 +0000 UTC m=+54.744073745" Jan 17 00:16:49.158511 containerd[1465]: 
time="2026-01-17T00:16:49.157538485Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:49.163965 containerd[1465]: time="2026-01-17T00:16:49.163632729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:16:49.164599 containerd[1465]: time="2026-01-17T00:16:49.163683181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:16:49.165081 kubelet[2515]: E0117 00:16:49.165010 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:49.165610 kubelet[2515]: E0117 00:16:49.165100 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:16:49.165610 kubelet[2515]: E0117 00:16:49.165312 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x69kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76574bc5-8kb79_calico-system(96db8296-fac0-44e6-a2a4-5921dbbfa75c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:49.166630 kubelet[2515]: E0117 00:16:49.166549 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:16:49.199248 systemd[1]: run-netns-cni\x2d08af041d\x2d5126\x2d64cf\x2d69fe\x2d86736d1fa529.mount: Deactivated successfully. Jan 17 00:16:49.202376 systemd[1]: run-netns-cni\x2d7c0321a5\x2d6f08\x2da012\x2dcd40\x2d495a41f2128b.mount: Deactivated successfully. 
Jan 17 00:16:49.299258 systemd-networkd[1375]: calidb65c978d23: Link UP Jan 17 00:16:49.300391 systemd-networkd[1375]: calidb65c978d23: Gained carrier Jan 17 00:16:49.337500 systemd-networkd[1375]: vxlan.calico: Link UP Jan 17 00:16:49.337509 systemd-networkd[1375]: vxlan.calico: Gained carrier Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.081 [INFO][4294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0 calico-apiserver-598d5588f5- calico-apiserver fd23fc1c-2ea9-47e8-be5f-5279e384fd8c 1019 0 2026-01-17 00:16:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598d5588f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 calico-apiserver-598d5588f5-xr65t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb65c978d23 [] [] }} ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.081 [INFO][4294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.156 [INFO][4321] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" HandleID="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.157 [INFO][4321] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" HandleID="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003da0a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-912fd252f4", "pod":"calico-apiserver-598d5588f5-xr65t", "timestamp":"2026-01-17 00:16:49.156550375 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.157 [INFO][4321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.157 [INFO][4321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.157 [INFO][4321] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.178 [INFO][4321] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.201 [INFO][4321] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.221 [INFO][4321] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.227 [INFO][4321] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.232 [INFO][4321] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.233 [INFO][4321] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.236 [INFO][4321] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.247 [INFO][4321] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.265 [INFO][4321] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.4/26] block=192.168.4.0/26 handle="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.265 [INFO][4321] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.4/26] handle="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.266 [INFO][4321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:49.376070 containerd[1465]: 2026-01-17 00:16:49.266 [INFO][4321] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.4/26] IPv6=[] ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" HandleID="k8s-pod-network.4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.378562 containerd[1465]: 2026-01-17 00:16:49.291 [INFO][4294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"calico-apiserver-598d5588f5-xr65t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb65c978d23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.378562 containerd[1465]: 2026-01-17 00:16:49.291 [INFO][4294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.4/32] ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.378562 containerd[1465]: 2026-01-17 00:16:49.293 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb65c978d23 ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.378562 containerd[1465]: 2026-01-17 00:16:49.301 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.378562 containerd[1465]: 2026-01-17 00:16:49.302 [INFO][4294] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef", Pod:"calico-apiserver-598d5588f5-xr65t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb65c978d23", MAC:"5e:98:ad:9d:18:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.378562 containerd[1465]: 2026-01-17 00:16:49.367 [INFO][4294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-xr65t" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:49.427920 containerd[1465]: time="2026-01-17T00:16:49.427435247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:49.427920 containerd[1465]: time="2026-01-17T00:16:49.427547922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:49.427920 containerd[1465]: time="2026-01-17T00:16:49.427573463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:49.429267 containerd[1465]: time="2026-01-17T00:16:49.428600577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:49.502478 containerd[1465]: time="2026-01-17T00:16:49.500965188Z" level=info msg="StopPodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\"" Jan 17 00:16:49.502048 systemd-networkd[1375]: cali7f26127017d: Link UP Jan 17 00:16:49.512463 systemd-networkd[1375]: cali7f26127017d: Gained carrier Jan 17 00:16:49.526788 systemd[1]: Started cri-containerd-4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef.scope - libcontainer container 4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef. Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.089 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0 csi-node-driver- calico-system 74b48e50-ea55-46c5-84cf-509f72a7af13 1018 0 2026-01-17 00:16:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 csi-node-driver-lfkr9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7f26127017d [] [] }} ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.089 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.215 [INFO][4323] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" HandleID="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.216 [INFO][4323] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" HandleID="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d6660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-912fd252f4", "pod":"csi-node-driver-lfkr9", "timestamp":"2026-01-17 00:16:49.215828168 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.216 [INFO][4323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.270 [INFO][4323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.272 [INFO][4323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.309 [INFO][4323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.369 [INFO][4323] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.384 [INFO][4323] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.390 [INFO][4323] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.398 [INFO][4323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.399 [INFO][4323] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.415 [INFO][4323] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090 Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.432 [INFO][4323] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.450 [INFO][4323] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.5/26] block=192.168.4.0/26 handle="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.450 [INFO][4323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.5/26] handle="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.450 [INFO][4323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:49.571434 containerd[1465]: 2026-01-17 00:16:49.450 [INFO][4323] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.5/26] IPv6=[] ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" HandleID="k8s-pod-network.40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.573124 containerd[1465]: 2026-01-17 00:16:49.463 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74b48e50-ea55-46c5-84cf-509f72a7af13", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"csi-node-driver-lfkr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f26127017d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.573124 containerd[1465]: 2026-01-17 00:16:49.463 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.5/32] ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.573124 containerd[1465]: 2026-01-17 00:16:49.463 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f26127017d ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.573124 containerd[1465]: 2026-01-17 00:16:49.513 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.573124 containerd[1465]: 2026-01-17 00:16:49.516 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74b48e50-ea55-46c5-84cf-509f72a7af13", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090", Pod:"csi-node-driver-lfkr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f26127017d", MAC:"46:51:60:d8:e1:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:49.573124 containerd[1465]: 2026-01-17 00:16:49.554 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090" Namespace="calico-system" Pod="csi-node-driver-lfkr9" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:49.659951 containerd[1465]: time="2026-01-17T00:16:49.659590813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:49.660477 containerd[1465]: time="2026-01-17T00:16:49.659753066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:49.662781 containerd[1465]: time="2026-01-17T00:16:49.660797146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:49.663155 containerd[1465]: time="2026-01-17T00:16:49.663025159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:49.724777 systemd[1]: Started cri-containerd-40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090.scope - libcontainer container 40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090. 
Jan 17 00:16:49.780550 systemd-networkd[1375]: cali8dd1eee73de: Gained IPv6LL Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.719 [INFO][4400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.720 [INFO][4400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" iface="eth0" netns="/var/run/netns/cni-a77e4716-b0fa-9af4-e14b-5c96aae43f2a" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.720 [INFO][4400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" iface="eth0" netns="/var/run/netns/cni-a77e4716-b0fa-9af4-e14b-5c96aae43f2a" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.720 [INFO][4400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" iface="eth0" netns="/var/run/netns/cni-a77e4716-b0fa-9af4-e14b-5c96aae43f2a" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.720 [INFO][4400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.720 [INFO][4400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.769 [INFO][4460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.770 [INFO][4460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.770 [INFO][4460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.783 [WARNING][4460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.783 [INFO][4460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.787 [INFO][4460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:49.803516 containerd[1465]: 2026-01-17 00:16:49.792 [INFO][4400] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:49.804690 containerd[1465]: time="2026-01-17T00:16:49.803986275Z" level=info msg="TearDown network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" successfully" Jan 17 00:16:49.804690 containerd[1465]: time="2026-01-17T00:16:49.804473090Z" level=info msg="StopPodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" returns successfully" Jan 17 00:16:49.806275 containerd[1465]: time="2026-01-17T00:16:49.806244839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2b6w7,Uid:abee5d80-98e2-4d1b-a6be-4919665c817d,Namespace:calico-system,Attempt:1,}" Jan 17 00:16:49.822788 containerd[1465]: time="2026-01-17T00:16:49.822739041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-xr65t,Uid:fd23fc1c-2ea9-47e8-be5f-5279e384fd8c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef\"" Jan 17 00:16:49.826628 containerd[1465]: time="2026-01-17T00:16:49.826587978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:49.844263 containerd[1465]: time="2026-01-17T00:16:49.844220636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfkr9,Uid:74b48e50-ea55-46c5-84cf-509f72a7af13,Namespace:calico-system,Attempt:1,} returns sandbox id \"40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090\"" Jan 17 00:16:49.945377 kubelet[2515]: E0117 00:16:49.945063 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:49.949838 kubelet[2515]: E0117 00:16:49.948584 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:16:50.034604 systemd-networkd[1375]: cali4de818d07d5: Gained IPv6LL Jan 17 00:16:50.111225 systemd-networkd[1375]: cali7c2e220de9e: Link UP Jan 17 00:16:50.112585 systemd-networkd[1375]: cali7c2e220de9e: Gained carrier Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:49.924 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0 goldmane-666569f655- calico-system abee5d80-98e2-4d1b-a6be-4919665c817d 1046 0 2026-01-17 00:16:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 goldmane-666569f655-2b6w7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7c2e220de9e [] [] }} ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" 
Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:49.924 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.009 [INFO][4499] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" HandleID="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.011 [INFO][4499] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" HandleID="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-912fd252f4", "pod":"goldmane-666569f655-2b6w7", "timestamp":"2026-01-17 00:16:50.009743813 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.011 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.012 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.012 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.027 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.048 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.057 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.060 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.064 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.065 [INFO][4499] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.067 [INFO][4499] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0 Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.080 [INFO][4499] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.094 [INFO][4499] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.6/26] block=192.168.4.0/26 handle="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.094 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.6/26] handle="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.094 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:16:50.132625 containerd[1465]: 2026-01-17 00:16:50.094 [INFO][4499] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.6/26] IPv6=[] ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" HandleID="k8s-pod-network.4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.133795 containerd[1465]: 2026-01-17 00:16:50.099 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"abee5d80-98e2-4d1b-a6be-4919665c817d", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"goldmane-666569f655-2b6w7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c2e220de9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:50.133795 containerd[1465]: 2026-01-17 00:16:50.099 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.6/32] ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.133795 containerd[1465]: 2026-01-17 00:16:50.099 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c2e220de9e ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.133795 containerd[1465]: 2026-01-17 00:16:50.109 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.133795 containerd[1465]: 2026-01-17 00:16:50.109 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" 
Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"abee5d80-98e2-4d1b-a6be-4919665c817d", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0", Pod:"goldmane-666569f655-2b6w7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c2e220de9e", MAC:"12:ae:00:82:c6:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:50.133795 containerd[1465]: 2026-01-17 00:16:50.122 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0" Namespace="calico-system" Pod="goldmane-666569f655-2b6w7" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:50.177958 containerd[1465]: time="2026-01-17T00:16:50.176387833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:50.177958 containerd[1465]: time="2026-01-17T00:16:50.176596394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:50.177958 containerd[1465]: time="2026-01-17T00:16:50.176664949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:50.179437 containerd[1465]: time="2026-01-17T00:16:50.178393101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:50.192595 containerd[1465]: time="2026-01-17T00:16:50.192538875Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:50.199362 containerd[1465]: time="2026-01-17T00:16:50.199270190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:50.200738 containerd[1465]: time="2026-01-17T00:16:50.199426375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:50.200843 kubelet[2515]: E0117 00:16:50.199683 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:50.200843 kubelet[2515]: E0117 00:16:50.199754 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:50.200843 kubelet[2515]: E0117 00:16:50.200111 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt5z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-598d5588f5-xr65t_calico-apiserver(fd23fc1c-2ea9-47e8-be5f-5279e384fd8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:50.202722 kubelet[2515]: E0117 00:16:50.202619 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:16:50.203456 systemd[1]: run-netns-cni\x2da77e4716\x2db0fa\x2d9af4\x2de14b\x2d5c96aae43f2a.mount: Deactivated successfully. Jan 17 00:16:50.209349 containerd[1465]: time="2026-01-17T00:16:50.208293330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:16:50.241741 systemd[1]: Started cri-containerd-4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0.scope - libcontainer container 4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0. 
Jan 17 00:16:50.362250 containerd[1465]: time="2026-01-17T00:16:50.362185041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2b6w7,Uid:abee5d80-98e2-4d1b-a6be-4919665c817d,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0\"" Jan 17 00:16:50.491954 containerd[1465]: time="2026-01-17T00:16:50.491501666Z" level=info msg="StopPodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\"" Jan 17 00:16:50.491954 containerd[1465]: time="2026-01-17T00:16:50.491545314Z" level=info msg="StopPodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\"" Jan 17 00:16:50.569702 containerd[1465]: time="2026-01-17T00:16:50.569628588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:50.572374 containerd[1465]: time="2026-01-17T00:16:50.572157779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:16:50.572374 containerd[1465]: time="2026-01-17T00:16:50.572152598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:16:50.572754 kubelet[2515]: E0117 00:16:50.572559 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:50.572754 kubelet[2515]: E0117 00:16:50.572624 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:16:50.573292 kubelet[2515]: E0117 00:16:50.572997 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj7nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:50.574289 containerd[1465]: time="2026-01-17T00:16:50.573932274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:16:50.674725 systemd-networkd[1375]: calidb65c978d23: Gained IPv6LL Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.636 [INFO][4617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.636 [INFO][4617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" iface="eth0" netns="/var/run/netns/cni-f5a3ab56-e4d1-c25e-4566-5f5a0079fe6d" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.638 [INFO][4617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" iface="eth0" netns="/var/run/netns/cni-f5a3ab56-e4d1-c25e-4566-5f5a0079fe6d" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.643 [INFO][4617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" iface="eth0" netns="/var/run/netns/cni-f5a3ab56-e4d1-c25e-4566-5f5a0079fe6d" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.643 [INFO][4617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.643 [INFO][4617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.690 [INFO][4640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.691 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.691 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.699 [WARNING][4640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.699 [INFO][4640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.702 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:50.714698 containerd[1465]: 2026-01-17 00:16:50.710 [INFO][4617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:50.719285 containerd[1465]: time="2026-01-17T00:16:50.716742850Z" level=info msg="TearDown network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" successfully" Jan 17 00:16:50.719285 containerd[1465]: time="2026-01-17T00:16:50.716809586Z" level=info msg="StopPodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" returns successfully" Jan 17 00:16:50.719392 kubelet[2515]: E0117 00:16:50.717349 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:50.723090 containerd[1465]: time="2026-01-17T00:16:50.723051499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rslgw,Uid:2f67ac25-9d9d-4a2d-8ba8-729f2f585a51,Namespace:kube-system,Attempt:1,}" Jan 17 00:16:50.724317 systemd[1]: run-netns-cni\x2df5a3ab56\x2de4d1\x2dc25e\x2d4566\x2d5f5a0079fe6d.mount: Deactivated successfully. 
Jan 17 00:16:50.739431 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.617 [INFO][4616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.618 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" iface="eth0" netns="/var/run/netns/cni-80de923c-3e5f-5300-2410-adad3acc37b7" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.620 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" iface="eth0" netns="/var/run/netns/cni-80de923c-3e5f-5300-2410-adad3acc37b7" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.621 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" iface="eth0" netns="/var/run/netns/cni-80de923c-3e5f-5300-2410-adad3acc37b7" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.622 [INFO][4616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.622 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.697 [INFO][4635] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.697 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.702 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.729 [WARNING][4635] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.730 [INFO][4635] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.734 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:50.741755 containerd[1465]: 2026-01-17 00:16:50.738 [INFO][4616] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:50.745014 containerd[1465]: time="2026-01-17T00:16:50.741985347Z" level=info msg="TearDown network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" successfully" Jan 17 00:16:50.745014 containerd[1465]: time="2026-01-17T00:16:50.742012032Z" level=info msg="StopPodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" returns successfully" Jan 17 00:16:50.745014 containerd[1465]: time="2026-01-17T00:16:50.742948679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-f9bzn,Uid:d3a2c65a-63b7-42fa-9521-230bac7a856c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:16:50.748592 systemd[1]: run-netns-cni\x2d80de923c\x2d3e5f\x2d5300\x2d2410\x2dadad3acc37b7.mount: Deactivated successfully. Jan 17 00:16:50.912535 containerd[1465]: time="2026-01-17T00:16:50.912324939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:50.915723 containerd[1465]: time="2026-01-17T00:16:50.915455981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:16:50.916453 kubelet[2515]: E0117 00:16:50.915864 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:50.916453 kubelet[2515]: E0117 00:16:50.915919 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:16:50.916453 kubelet[2515]: E0117 00:16:50.916213 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tngkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2b6w7_calico-system(abee5d80-98e2-4d1b-a6be-4919665c817d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:50.917005 containerd[1465]: time="2026-01-17T00:16:50.915595730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:50.917005 containerd[1465]: time="2026-01-17T00:16:50.916906878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:16:50.920342 kubelet[2515]: E0117 00:16:50.918337 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:16:50.953833 kubelet[2515]: E0117 00:16:50.953435 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:16:50.957404 kubelet[2515]: E0117 00:16:50.956997 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:50.957404 kubelet[2515]: E0117 00:16:50.957273 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:16:51.078536 systemd-networkd[1375]: cali4c73e0eb0a3: Link UP Jan 17 00:16:51.081879 systemd-networkd[1375]: cali4c73e0eb0a3: Gained carrier Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.868 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0 coredns-668d6bf9bc- kube-system 2f67ac25-9d9d-4a2d-8ba8-729f2f585a51 1070 0 2026-01-17 00:16:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 coredns-668d6bf9bc-rslgw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c73e0eb0a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.868 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.944 [INFO][4672] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" 
HandleID="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.945 [INFO][4672] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" HandleID="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-912fd252f4", "pod":"coredns-668d6bf9bc-rslgw", "timestamp":"2026-01-17 00:16:50.94447498 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.945 [INFO][4672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.945 [INFO][4672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.945 [INFO][4672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.962 [INFO][4672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:50.992 [INFO][4672] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.020 [INFO][4672] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.033 [INFO][4672] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.039 [INFO][4672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.039 [INFO][4672] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.043 [INFO][4672] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11 Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.051 [INFO][4672] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.059 [INFO][4672] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.7/26] block=192.168.4.0/26 handle="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.059 [INFO][4672] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.7/26] handle="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.059 [INFO][4672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:51.116466 containerd[1465]: 2026-01-17 00:16:51.059 [INFO][4672] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.7/26] IPv6=[] ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" HandleID="k8s-pod-network.cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.118027 containerd[1465]: 2026-01-17 00:16:51.063 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"coredns-668d6bf9bc-rslgw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c73e0eb0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:51.118027 containerd[1465]: 2026-01-17 00:16:51.064 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.7/32] ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.118027 containerd[1465]: 2026-01-17 00:16:51.064 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c73e0eb0a3 ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" 
WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.118027 containerd[1465]: 2026-01-17 00:16:51.083 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.118027 containerd[1465]: 2026-01-17 00:16:51.084 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11", Pod:"coredns-668d6bf9bc-rslgw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c73e0eb0a3", MAC:"fe:38:ac:8d:67:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:51.118027 containerd[1465]: 2026-01-17 00:16:51.110 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11" Namespace="kube-system" Pod="coredns-668d6bf9bc-rslgw" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:51.166561 containerd[1465]: time="2026-01-17T00:16:51.166360653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:51.166561 containerd[1465]: time="2026-01-17T00:16:51.166487927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:51.166809 containerd[1465]: time="2026-01-17T00:16:51.166526248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:51.166900 containerd[1465]: time="2026-01-17T00:16:51.166858328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:51.202300 systemd-networkd[1375]: calica69db75bb1: Link UP Jan 17 00:16:51.206366 systemd-networkd[1375]: calica69db75bb1: Gained carrier Jan 17 00:16:51.250665 containerd[1465]: time="2026-01-17T00:16:51.250533399Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:51.261350 containerd[1465]: time="2026-01-17T00:16:51.260595581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:16:51.262146 containerd[1465]: time="2026-01-17T00:16:51.261763272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:16:51.263750 systemd[1]: Started cri-containerd-cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11.scope - libcontainer container cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11. Jan 17 00:16:51.267971 kubelet[2515]: E0117 00:16:51.267742 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:51.267971 kubelet[2515]: E0117 00:16:51.267819 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:16:51.271945 kubelet[2515]: E0117 00:16:51.267998 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj7nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:51.276964 kubelet[2515]: E0117 00:16:51.273832 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:50.893 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0 calico-apiserver-598d5588f5- calico-apiserver d3a2c65a-63b7-42fa-9521-230bac7a856c 1069 0 2026-01-17 00:16:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598d5588f5 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-912fd252f4 calico-apiserver-598d5588f5-f9bzn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica69db75bb1 [] [] }} ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:50.893 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:50.944 [INFO][4678] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" HandleID="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:50.947 [INFO][4678] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" HandleID="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-912fd252f4", "pod":"calico-apiserver-598d5588f5-f9bzn", "timestamp":"2026-01-17 00:16:50.944733326 +0000 UTC"}, Hostname:"ci-4081.3.6-n-912fd252f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:50.947 [INFO][4678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.060 [INFO][4678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.061 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-912fd252f4' Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.092 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.113 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.128 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.133 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.140 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.140 [INFO][4678] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.143 [INFO][4678] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.157 [INFO][4678] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.177 [INFO][4678] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.8/26] block=192.168.4.0/26 handle="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.177 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.8/26] handle="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" host="ci-4081.3.6-n-912fd252f4" Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.177 [INFO][4678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
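The two CNI ADD traces above draw from the same host-affine IPAM block, 192.168.4.0/26: coredns-668d6bf9bc-rslgw is assigned 192.168.4.7 and calico-apiserver-598d5588f5-f9bzn is assigned 192.168.4.8, with each claim serialized by the host-wide IPAM lock. A minimal sketch of the block arithmetic those entries imply follows; it is not Calico's allocator, and the third address is a made-up out-of-block probe included only for contrast.

// Block arithmetic behind the IPAM entries above. Assumes only what the
// log shows: block 192.168.4.0/26 (64 addresses, .0 through .63) and the
// two addresses claimed from it; 192.168.4.70 is a hypothetical outsider.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.4.0/26")

	for _, s := range []string{"192.168.4.7", "192.168.4.8", "192.168.4.70"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
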
Jan 17 00:16:51.309918 containerd[1465]: 2026-01-17 00:16:51.177 [INFO][4678] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.8/26] IPv6=[] ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" HandleID="k8s-pod-network.c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.310630 containerd[1465]: 2026-01-17 00:16:51.188 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2c65a-63b7-42fa-9521-230bac7a856c", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"", Pod:"calico-apiserver-598d5588f5-f9bzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica69db75bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:51.310630 containerd[1465]: 2026-01-17 00:16:51.188 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.8/32] ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.310630 containerd[1465]: 2026-01-17 00:16:51.189 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica69db75bb1 ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.310630 containerd[1465]: 2026-01-17 00:16:51.204 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.310630 containerd[1465]: 2026-01-17 00:16:51.207 [INFO][4659] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2c65a-63b7-42fa-9521-230bac7a856c", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a", Pod:"calico-apiserver-598d5588f5-f9bzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica69db75bb1", MAC:"b2:48:e0:96:e7:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:51.310630 containerd[1465]: 2026-01-17 00:16:51.229 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a" Namespace="calico-apiserver" Pod="calico-apiserver-598d5588f5-f9bzn" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:51.374198 containerd[1465]: time="2026-01-17T00:16:51.371939910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:16:51.374198 containerd[1465]: time="2026-01-17T00:16:51.372022717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:16:51.374198 containerd[1465]: time="2026-01-17T00:16:51.372062216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:51.374198 containerd[1465]: time="2026-01-17T00:16:51.372171943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:16:51.420729 containerd[1465]: time="2026-01-17T00:16:51.420660917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rslgw,Uid:2f67ac25-9d9d-4a2d-8ba8-729f2f585a51,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11\"" Jan 17 00:16:51.425002 kubelet[2515]: E0117 00:16:51.424537 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:51.440254 containerd[1465]: time="2026-01-17T00:16:51.440196178Z" level=info msg="CreateContainer within sandbox \"cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:16:51.443017 systemd-networkd[1375]: cali7f26127017d: Gained IPv6LL Jan 17 00:16:51.443445 systemd[1]: run-containerd-runc-k8s.io-c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a-runc.rUGjGu.mount: Deactivated successfully. Jan 17 00:16:51.451454 systemd[1]: Started cri-containerd-c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a.scope - libcontainer container c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a. Jan 17 00:16:51.508102 containerd[1465]: time="2026-01-17T00:16:51.507554718Z" level=info msg="CreateContainer within sandbox \"cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4a9fa985f6322f050ec4da6f505dcaf86c1be3c94bcb99ebd3043f98755df44\"" Jan 17 00:16:51.508823 containerd[1465]: time="2026-01-17T00:16:51.508780685Z" level=info msg="StartContainer for \"b4a9fa985f6322f050ec4da6f505dcaf86c1be3c94bcb99ebd3043f98755df44\"" Jan 17 00:16:51.568702 systemd[1]: Started cri-containerd-b4a9fa985f6322f050ec4da6f505dcaf86c1be3c94bcb99ebd3043f98755df44.scope - libcontainer container b4a9fa985f6322f050ec4da6f505dcaf86c1be3c94bcb99ebd3043f98755df44. 
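Every Calico image pull in this section fails the same way: the ghcr.io lookup for the v3.30.4 tag comes back 404 ("trying next host - response was http.StatusNotFound"), containerd reports ErrImagePull with "not found", and kubelet then moves the goldmane, csi-node-driver, and calico-apiserver pods into ImagePullBackOff. The sketch below shows the kind of manifest probe that yields such a 404; it is not containerd's resolver, it ignores ghcr.io's bearer-token requirement for pulls, and the Accept header is only one of the media types a real client would negotiate. Registry, repository, and tag are carried over from the failing pulls in the log.

// Hypothetical probe of an OCI registry manifest endpoint, not part of
// containerd; a 404 here is the condition surfaced as "not found" above.
package main

import (
	"fmt"
	"net/http"
)

func tagExists(registry, repo, tag string) (bool, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return false, err
	}
	// A real client sends several manifest media types; one is enough here.
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusOK:
		return true, nil // tag resolves
	case http.StatusNotFound:
		return false, nil // what the failing pulls in this log hit
	default:
		// Without a token, ghcr.io typically answers 401 here instead.
		return false, fmt.Errorf("unexpected status %s", resp.Status)
	}
}

func main() {
	ok, err := tagExists("ghcr.io", "flatcar/calico/apiserver", "v3.30.4")
	fmt.Println(ok, err)
}
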
Jan 17 00:16:51.622788 containerd[1465]: time="2026-01-17T00:16:51.622225178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d5588f5-f9bzn,Uid:d3a2c65a-63b7-42fa-9521-230bac7a856c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a\"" Jan 17 00:16:51.627861 containerd[1465]: time="2026-01-17T00:16:51.627131854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:16:51.638928 containerd[1465]: time="2026-01-17T00:16:51.638768974Z" level=info msg="StartContainer for \"b4a9fa985f6322f050ec4da6f505dcaf86c1be3c94bcb99ebd3043f98755df44\" returns successfully" Jan 17 00:16:51.949899 containerd[1465]: time="2026-01-17T00:16:51.949783269Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:16:51.952710 containerd[1465]: time="2026-01-17T00:16:51.952332914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:16:51.952710 containerd[1465]: time="2026-01-17T00:16:51.952513802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:16:51.953113 kubelet[2515]: E0117 00:16:51.952676 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:51.953113 kubelet[2515]: E0117 00:16:51.952730 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:16:51.953113 kubelet[2515]: E0117 00:16:51.952876 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zznc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-598d5588f5-f9bzn_calico-apiserver(d3a2c65a-63b7-42fa-9521-230bac7a856c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:16:51.955003 kubelet[2515]: E0117 00:16:51.954952 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:16:51.960927 kubelet[2515]: E0117 00:16:51.960316 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:51.964084 kubelet[2515]: E0117 00:16:51.964015 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:51.966495 kubelet[2515]: E0117 00:16:51.966224 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:16:51.969189 kubelet[2515]: E0117 00:16:51.968295 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:16:51.972443 kubelet[2515]: E0117 00:16:51.966262 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:16:51.987376 kubelet[2515]: I0117 00:16:51.987305 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rslgw" podStartSLOduration=51.987282961 podStartE2EDuration="51.987282961s" podCreationTimestamp="2026-01-17 00:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:16:51.98514176 +0000 UTC m=+57.716099171" watchObservedRunningTime="2026-01-17 00:16:51.987282961 +0000 UTC m=+57.718240323" Jan 17 00:16:52.022195 systemd-networkd[1375]: cali7c2e220de9e: Gained IPv6LL Jan 17 00:16:52.274673 systemd-networkd[1375]: calica69db75bb1: Gained IPv6LL Jan 17 00:16:52.466738 systemd-networkd[1375]: cali4c73e0eb0a3: Gained IPv6LL Jan 17 00:16:52.968082 kubelet[2515]: E0117 00:16:52.966897 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:52.970004 kubelet[2515]: E0117 00:16:52.969799 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:16:53.968834 kubelet[2515]: E0117 00:16:53.968666 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:16:54.498402 containerd[1465]: time="2026-01-17T00:16:54.498348573Z" level=info msg="StopPodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\"" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.579 [WARNING][4842] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0", GenerateName:"calico-kube-controllers-76574bc5-", Namespace:"calico-system", SelfLink:"", UID:"96db8296-fac0-44e6-a2a4-5921dbbfa75c", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76574bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734", Pod:"calico-kube-controllers-76574bc5-8kb79", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dd1eee73de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.580 [INFO][4842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.580 [INFO][4842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" iface="eth0" netns="" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.580 [INFO][4842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.582 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.635 [INFO][4849] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.635 [INFO][4849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.635 [INFO][4849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.644 [WARNING][4849] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.644 [INFO][4849] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.646 [INFO][4849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:54.651164 containerd[1465]: 2026-01-17 00:16:54.648 [INFO][4842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.652300 containerd[1465]: time="2026-01-17T00:16:54.651239244Z" level=info msg="TearDown network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" successfully" Jan 17 00:16:54.652300 containerd[1465]: time="2026-01-17T00:16:54.651288956Z" level=info msg="StopPodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" returns successfully" Jan 17 00:16:54.652300 containerd[1465]: time="2026-01-17T00:16:54.652109064Z" level=info msg="RemovePodSandbox for \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\"" Jan 17 00:16:54.652300 containerd[1465]: time="2026-01-17T00:16:54.652147074Z" level=info msg="Forcibly stopping sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\"" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.713 [WARNING][4864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0", GenerateName:"calico-kube-controllers-76574bc5-", Namespace:"calico-system", SelfLink:"", UID:"96db8296-fac0-44e6-a2a4-5921dbbfa75c", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76574bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"2f0ac97f9992fe1a48c6b021036206cd643ecad78100b76ea4f5b83a6dcd4734", Pod:"calico-kube-controllers-76574bc5-8kb79", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dd1eee73de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.713 [INFO][4864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.713 [INFO][4864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" iface="eth0" netns="" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.713 [INFO][4864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.713 [INFO][4864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.746 [INFO][4871] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.746 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.746 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.756 [WARNING][4871] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.756 [INFO][4871] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" HandleID="k8s-pod-network.f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--kube--controllers--76574bc5--8kb79-eth0" Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.758 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:54.763510 containerd[1465]: 2026-01-17 00:16:54.760 [INFO][4864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b" Jan 17 00:16:54.763510 containerd[1465]: time="2026-01-17T00:16:54.763356689Z" level=info msg="TearDown network for sandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" successfully" Jan 17 00:16:54.774471 containerd[1465]: time="2026-01-17T00:16:54.774370775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:54.774651 containerd[1465]: time="2026-01-17T00:16:54.774505980Z" level=info msg="RemovePodSandbox \"f46bb503ee981dfedd9f80b4a9ee729ce585d60a11bb71e15886b10c6cabf01b\" returns successfully" Jan 17 00:16:54.775913 containerd[1465]: time="2026-01-17T00:16:54.775343233Z" level=info msg="StopPodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\"" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.838 [WARNING][4885] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2c65a-63b7-42fa-9521-230bac7a856c", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a", Pod:"calico-apiserver-598d5588f5-f9bzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica69db75bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.838 [INFO][4885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.838 [INFO][4885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" iface="eth0" netns="" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.838 [INFO][4885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.838 [INFO][4885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.872 [INFO][4892] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.872 [INFO][4892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.872 [INFO][4892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.883 [WARNING][4892] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.883 [INFO][4892] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.886 [INFO][4892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:54.892880 containerd[1465]: 2026-01-17 00:16:54.889 [INFO][4885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:54.893867 containerd[1465]: time="2026-01-17T00:16:54.893600423Z" level=info msg="TearDown network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" successfully" Jan 17 00:16:54.893867 containerd[1465]: time="2026-01-17T00:16:54.893800873Z" level=info msg="StopPodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" returns successfully" Jan 17 00:16:54.894782 containerd[1465]: time="2026-01-17T00:16:54.894677403Z" level=info msg="RemovePodSandbox for \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\"" Jan 17 00:16:54.894927 containerd[1465]: time="2026-01-17T00:16:54.894789400Z" level=info msg="Forcibly stopping sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\"" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:54.958 [WARNING][4907] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2c65a-63b7-42fa-9521-230bac7a856c", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"c8dfa5dcc6abeca360d689283160bdc77d858583b6817ff045660b45b05c490a", Pod:"calico-apiserver-598d5588f5-f9bzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica69db75bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:54.960 [INFO][4907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:54.960 [INFO][4907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" iface="eth0" netns="" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:54.960 [INFO][4907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:54.960 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.011 [INFO][4916] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.011 [INFO][4916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.011 [INFO][4916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.018 [WARNING][4916] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.019 [INFO][4916] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" HandleID="k8s-pod-network.42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--f9bzn-eth0" Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.021 [INFO][4916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.025995 containerd[1465]: 2026-01-17 00:16:55.023 [INFO][4907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981" Jan 17 00:16:55.025995 containerd[1465]: time="2026-01-17T00:16:55.025942838Z" level=info msg="TearDown network for sandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" successfully" Jan 17 00:16:55.033279 containerd[1465]: time="2026-01-17T00:16:55.033203638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:55.033279 containerd[1465]: time="2026-01-17T00:16:55.033286302Z" level=info msg="RemovePodSandbox \"42f46fe090b606811a7c214bf303a20fffb167baf6a0a4b82f2870b27920a981\" returns successfully" Jan 17 00:16:55.034380 containerd[1465]: time="2026-01-17T00:16:55.034325820Z" level=info msg="StopPodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\"" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.086 [WARNING][4930] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cc8f574-3f41-42c0-ad2b-73a6264664c2", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed", Pod:"coredns-668d6bf9bc-spn4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4de818d07d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.087 [INFO][4930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.087 [INFO][4930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" iface="eth0" netns="" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.087 [INFO][4930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.088 [INFO][4930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.122 [INFO][4937] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.122 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.122 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.130 [WARNING][4937] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.130 [INFO][4937] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.132 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.140036 containerd[1465]: 2026-01-17 00:16:55.136 [INFO][4930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.140036 containerd[1465]: time="2026-01-17T00:16:55.139894862Z" level=info msg="TearDown network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" successfully" Jan 17 00:16:55.140036 containerd[1465]: time="2026-01-17T00:16:55.139924369Z" level=info msg="StopPodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" returns successfully" Jan 17 00:16:55.142098 containerd[1465]: time="2026-01-17T00:16:55.140584528Z" level=info msg="RemovePodSandbox for \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\"" Jan 17 00:16:55.142098 containerd[1465]: time="2026-01-17T00:16:55.140622798Z" level=info msg="Forcibly stopping sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\"" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.196 [WARNING][4951] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cc8f574-3f41-42c0-ad2b-73a6264664c2", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"6b63a28789bb070adbe49a4eb6ad30d762074150d085ef9252bc2624b30fe4ed", Pod:"coredns-668d6bf9bc-spn4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4de818d07d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.196 [INFO][4951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.197 [INFO][4951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" iface="eth0" netns="" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.197 [INFO][4951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.197 [INFO][4951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.232 [INFO][4958] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.232 [INFO][4958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.232 [INFO][4958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.241 [WARNING][4958] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.241 [INFO][4958] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" HandleID="k8s-pod-network.44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--spn4s-eth0" Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.244 [INFO][4958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.251590 containerd[1465]: 2026-01-17 00:16:55.247 [INFO][4951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6" Jan 17 00:16:55.251590 containerd[1465]: time="2026-01-17T00:16:55.250394669Z" level=info msg="TearDown network for sandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" successfully" Jan 17 00:16:55.260089 containerd[1465]: time="2026-01-17T00:16:55.259796494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:55.260089 containerd[1465]: time="2026-01-17T00:16:55.259900930Z" level=info msg="RemovePodSandbox \"44e891593804e101ca66ba664a3e7090ba8654b106de88e61702f2e86233a3e6\" returns successfully" Jan 17 00:16:55.261284 containerd[1465]: time="2026-01-17T00:16:55.260823656Z" level=info msg="StopPodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\"" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.323 [WARNING][4971] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.323 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.323 [INFO][4971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" iface="eth0" netns="" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.323 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.323 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.367 [INFO][4979] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.367 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.367 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.378 [WARNING][4979] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.378 [INFO][4979] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.380 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.386353 containerd[1465]: 2026-01-17 00:16:55.383 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.388465 containerd[1465]: time="2026-01-17T00:16:55.387519099Z" level=info msg="TearDown network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" successfully" Jan 17 00:16:55.388465 containerd[1465]: time="2026-01-17T00:16:55.387554189Z" level=info msg="StopPodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" returns successfully" Jan 17 00:16:55.389927 containerd[1465]: time="2026-01-17T00:16:55.389890743Z" level=info msg="RemovePodSandbox for \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\"" Jan 17 00:16:55.390066 containerd[1465]: time="2026-01-17T00:16:55.389934317Z" level=info msg="Forcibly stopping sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\"" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.465 [WARNING][4993] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" WorkloadEndpoint="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.465 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.465 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" iface="eth0" netns="" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.465 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.465 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.516 [INFO][5002] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.516 [INFO][5002] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.516 [INFO][5002] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.526 [WARNING][5002] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.526 [INFO][5002] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" HandleID="k8s-pod-network.0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Workload="ci--4081.3.6--n--912fd252f4-k8s-whisker--6c748659fb--h9bq5-eth0" Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.528 [INFO][5002] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.534310 containerd[1465]: 2026-01-17 00:16:55.531 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417" Jan 17 00:16:55.536496 containerd[1465]: time="2026-01-17T00:16:55.534365406Z" level=info msg="TearDown network for sandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" successfully" Jan 17 00:16:55.545912 containerd[1465]: time="2026-01-17T00:16:55.545825449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:55.547477 containerd[1465]: time="2026-01-17T00:16:55.546171658Z" level=info msg="RemovePodSandbox \"0aace391784eacafec893208408d9d8dd1f5bece15b75bf4a62dd03336f5e417\" returns successfully" Jan 17 00:16:55.547477 containerd[1465]: time="2026-01-17T00:16:55.547199667Z" level=info msg="StopPodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\"" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.606 [WARNING][5020] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11", Pod:"coredns-668d6bf9bc-rslgw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c73e0eb0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.606 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.606 [INFO][5020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" iface="eth0" netns="" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.606 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.606 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.639 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.640 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.640 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.648 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.648 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.651 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.655717 containerd[1465]: 2026-01-17 00:16:55.653 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.656366 containerd[1465]: time="2026-01-17T00:16:55.655762148Z" level=info msg="TearDown network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" successfully" Jan 17 00:16:55.656366 containerd[1465]: time="2026-01-17T00:16:55.655798646Z" level=info msg="StopPodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" returns successfully" Jan 17 00:16:55.656714 containerd[1465]: time="2026-01-17T00:16:55.656678660Z" level=info msg="RemovePodSandbox for \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\"" Jan 17 00:16:55.656794 containerd[1465]: time="2026-01-17T00:16:55.656723106Z" level=info msg="Forcibly stopping sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\"" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.713 [WARNING][5041] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f67ac25-9d9d-4a2d-8ba8-729f2f585a51", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"cc981a477467822642f31aca339fd1397383b9c0915f685742360f2c7329aa11", Pod:"coredns-668d6bf9bc-rslgw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c73e0eb0a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.713 [INFO][5041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.713 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" iface="eth0" netns="" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.713 [INFO][5041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.713 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.751 [INFO][5048] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.751 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.751 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.759 [WARNING][5048] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.759 [INFO][5048] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" HandleID="k8s-pod-network.31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Workload="ci--4081.3.6--n--912fd252f4-k8s-coredns--668d6bf9bc--rslgw-eth0" Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.761 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.768743 containerd[1465]: 2026-01-17 00:16:55.763 [INFO][5041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d" Jan 17 00:16:55.768743 containerd[1465]: time="2026-01-17T00:16:55.766246902Z" level=info msg="TearDown network for sandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" successfully" Jan 17 00:16:55.774264 containerd[1465]: time="2026-01-17T00:16:55.774198677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:55.775061 containerd[1465]: time="2026-01-17T00:16:55.774510395Z" level=info msg="RemovePodSandbox \"31c78df1dd67a548d2795fba2f046e3fcbda4b44064514813d75fa0a1542bf8d\" returns successfully" Jan 17 00:16:55.775357 containerd[1465]: time="2026-01-17T00:16:55.775178513Z" level=info msg="StopPodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\"" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.834 [WARNING][5062] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef", Pod:"calico-apiserver-598d5588f5-xr65t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb65c978d23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.836 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.836 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" iface="eth0" netns="" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.837 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.837 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.875 [INFO][5069] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.875 [INFO][5069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.875 [INFO][5069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.884 [WARNING][5069] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.884 [INFO][5069] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.887 [INFO][5069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:55.892546 containerd[1465]: 2026-01-17 00:16:55.889 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:55.893582 containerd[1465]: time="2026-01-17T00:16:55.893435575Z" level=info msg="TearDown network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" successfully" Jan 17 00:16:55.893582 containerd[1465]: time="2026-01-17T00:16:55.893472642Z" level=info msg="StopPodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" returns successfully" Jan 17 00:16:55.894573 containerd[1465]: time="2026-01-17T00:16:55.894535629Z" level=info msg="RemovePodSandbox for \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\"" Jan 17 00:16:55.894717 containerd[1465]: time="2026-01-17T00:16:55.894583622Z" level=info msg="Forcibly stopping sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\"" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.943 [WARNING][5083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0", GenerateName:"calico-apiserver-598d5588f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd23fc1c-2ea9-47e8-be5f-5279e384fd8c", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d5588f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"4905d5c0c90d49fb2ae3b33b21a919649a62e60c8463db003d5549d379eda7ef", Pod:"calico-apiserver-598d5588f5-xr65t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb65c978d23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.943 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.943 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" iface="eth0" netns="" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.943 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.943 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.974 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.975 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.975 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.983 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.983 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" HandleID="k8s-pod-network.80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Workload="ci--4081.3.6--n--912fd252f4-k8s-calico--apiserver--598d5588f5--xr65t-eth0" Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.989 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.001040 containerd[1465]: 2026-01-17 00:16:55.992 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3" Jan 17 00:16:56.001040 containerd[1465]: time="2026-01-17T00:16:56.000771247Z" level=info msg="TearDown network for sandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" successfully" Jan 17 00:16:56.012829 containerd[1465]: time="2026-01-17T00:16:56.012554004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:56.012829 containerd[1465]: time="2026-01-17T00:16:56.012671009Z" level=info msg="RemovePodSandbox \"80d79e1dd770d0cf15204a932b15a90b89db61e7e85ce4c49c2b8405f932d2e3\" returns successfully" Jan 17 00:16:56.013362 containerd[1465]: time="2026-01-17T00:16:56.013310322Z" level=info msg="StopPodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\"" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.072 [WARNING][5104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"abee5d80-98e2-4d1b-a6be-4919665c817d", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0", Pod:"goldmane-666569f655-2b6w7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c2e220de9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.073 [INFO][5104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.073 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" iface="eth0" netns="" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.073 [INFO][5104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.073 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.106 [INFO][5111] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.106 [INFO][5111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.106 [INFO][5111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.115 [WARNING][5111] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.115 [INFO][5111] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.117 [INFO][5111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.123445 containerd[1465]: 2026-01-17 00:16:56.120 [INFO][5104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.123990 containerd[1465]: time="2026-01-17T00:16:56.123450170Z" level=info msg="TearDown network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" successfully" Jan 17 00:16:56.123990 containerd[1465]: time="2026-01-17T00:16:56.123492894Z" level=info msg="StopPodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" returns successfully" Jan 17 00:16:56.124518 containerd[1465]: time="2026-01-17T00:16:56.124488794Z" level=info msg="RemovePodSandbox for \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\"" Jan 17 00:16:56.124601 containerd[1465]: time="2026-01-17T00:16:56.124553157Z" level=info msg="Forcibly stopping sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\"" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.183 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"abee5d80-98e2-4d1b-a6be-4919665c817d", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"4f572b073f9fa58f5b9d08296038d2e6152d0d60eb500596886d53b6b8437ec0", Pod:"goldmane-666569f655-2b6w7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c2e220de9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.184 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.184 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" iface="eth0" netns="" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.184 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.184 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.222 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.222 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.222 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.232 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.232 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" HandleID="k8s-pod-network.80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Workload="ci--4081.3.6--n--912fd252f4-k8s-goldmane--666569f655--2b6w7-eth0" Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.235 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.241776 containerd[1465]: 2026-01-17 00:16:56.238 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3" Jan 17 00:16:56.242927 containerd[1465]: time="2026-01-17T00:16:56.241843180Z" level=info msg="TearDown network for sandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" successfully" Jan 17 00:16:56.248771 containerd[1465]: time="2026-01-17T00:16:56.248706483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:56.249121 containerd[1465]: time="2026-01-17T00:16:56.248811619Z" level=info msg="RemovePodSandbox \"80ad9c1b6bf14b9fab9d04fd157851b2cedf40cac83c9e27578d81291132a0a3\" returns successfully" Jan 17 00:16:56.249744 containerd[1465]: time="2026-01-17T00:16:56.249702025Z" level=info msg="StopPodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\"" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.307 [WARNING][5147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74b48e50-ea55-46c5-84cf-509f72a7af13", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090", Pod:"csi-node-driver-lfkr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f26127017d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.308 [INFO][5147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.308 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" iface="eth0" netns="" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.308 [INFO][5147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.308 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.344 [INFO][5154] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.344 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.344 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.353 [WARNING][5154] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.353 [INFO][5154] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.356 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.361454 containerd[1465]: 2026-01-17 00:16:56.358 [INFO][5147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.361454 containerd[1465]: time="2026-01-17T00:16:56.361376924Z" level=info msg="TearDown network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" successfully" Jan 17 00:16:56.362304 containerd[1465]: time="2026-01-17T00:16:56.361836874Z" level=info msg="StopPodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" returns successfully" Jan 17 00:16:56.363041 containerd[1465]: time="2026-01-17T00:16:56.362616727Z" level=info msg="RemovePodSandbox for \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\"" Jan 17 00:16:56.363041 containerd[1465]: time="2026-01-17T00:16:56.362666546Z" level=info msg="Forcibly stopping sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\"" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.422 [WARNING][5168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74b48e50-ea55-46c5-84cf-509f72a7af13", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-912fd252f4", ContainerID:"40055b2d548f0919ceb15ca9d2f8ef74f4887c30555ad16c6cad49f91a219090", Pod:"csi-node-driver-lfkr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f26127017d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.423 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.423 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" iface="eth0" netns="" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.423 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.423 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.463 [INFO][5175] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.463 [INFO][5175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.463 [INFO][5175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.472 [WARNING][5175] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.472 [INFO][5175] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" HandleID="k8s-pod-network.fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Workload="ci--4081.3.6--n--912fd252f4-k8s-csi--node--driver--lfkr9-eth0" Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.474 [INFO][5175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:16:56.481651 containerd[1465]: 2026-01-17 00:16:56.477 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311" Jan 17 00:16:56.482372 containerd[1465]: time="2026-01-17T00:16:56.481702183Z" level=info msg="TearDown network for sandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" successfully" Jan 17 00:16:56.489945 containerd[1465]: time="2026-01-17T00:16:56.489783846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:16:56.489945 containerd[1465]: time="2026-01-17T00:16:56.489936898Z" level=info msg="RemovePodSandbox \"fa5e46525d5ac77c5f52c1a58e410f601e78f7da697af73c2781cd381e43d311\" returns successfully" Jan 17 00:17:00.559206 systemd[1]: Started sshd@7-64.227.98.118:22-4.153.228.146:47280.service - OpenSSH per-connection server daemon (4.153.228.146:47280). Jan 17 00:17:01.025816 sshd[5193]: Accepted publickey for core from 4.153.228.146 port 47280 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:01.030020 sshd[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:01.042089 systemd-logind[1450]: New session 8 of user core. Jan 17 00:17:01.045377 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:17:02.260859 sshd[5193]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:02.265730 systemd[1]: sshd@7-64.227.98.118:22-4.153.228.146:47280.service: Deactivated successfully. Jan 17 00:17:02.269263 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:17:02.273065 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:17:02.275197 systemd-logind[1450]: Removed session 8. 
Jan 17 00:17:02.484693 containerd[1465]: time="2026-01-17T00:17:02.484617482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:17:02.821186 containerd[1465]: time="2026-01-17T00:17:02.820817764Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:02.825032 containerd[1465]: time="2026-01-17T00:17:02.824867257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:17:02.825644 containerd[1465]: time="2026-01-17T00:17:02.824931672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:02.826812 kubelet[2515]: E0117 00:17:02.826562 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:02.827468 kubelet[2515]: E0117 00:17:02.826820 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:02.827468 kubelet[2515]: E0117 00:17:02.827281 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x69kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76574bc5-8kb79_calico-system(96db8296-fac0-44e6-a2a4-5921dbbfa75c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:02.829002 kubelet[2515]: E0117 00:17:02.828660 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:17:02.829195 containerd[1465]: time="2026-01-17T00:17:02.828693736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:17:03.159221 containerd[1465]: time="2026-01-17T00:17:03.159119436Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:03.162170 containerd[1465]: time="2026-01-17T00:17:03.162055207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:17:03.163631 containerd[1465]: time="2026-01-17T00:17:03.162072333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:17:03.163699 kubelet[2515]: E0117 00:17:03.162493 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:03.163699 kubelet[2515]: E0117 00:17:03.162568 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:03.163699 kubelet[2515]: E0117 00:17:03.162871 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d9e40a58538b470a9311e65e534c32dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfkq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cdb99998-slhfc_calico-system(1d8c80dc-ca7e-4704-80bc-010f68ebac60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:03.166260 containerd[1465]: time="2026-01-17T00:17:03.166213633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:17:03.525852 containerd[1465]: time="2026-01-17T00:17:03.525291140Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:03.529289 containerd[1465]: time="2026-01-17T00:17:03.529070482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:17:03.529728 containerd[1465]: time="2026-01-17T00:17:03.529104153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:03.530010 kubelet[2515]: E0117 00:17:03.529883 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 
00:17:03.530010 kubelet[2515]: E0117 00:17:03.529963 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:03.530749 kubelet[2515]: E0117 00:17:03.530211 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfkq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cdb99998-slhfc_calico-system(1d8c80dc-ca7e-4704-80bc-010f68ebac60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:03.532435 containerd[1465]: time="2026-01-17T00:17:03.531956655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:17:03.532551 kubelet[2515]: E0117 00:17:03.532249 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60" Jan 17 00:17:03.885234 containerd[1465]: time="2026-01-17T00:17:03.884796892Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:03.910725 containerd[1465]: time="2026-01-17T00:17:03.910397365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:03.910725 containerd[1465]: time="2026-01-17T00:17:03.910461044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:17:03.911794 kubelet[2515]: E0117 00:17:03.910825 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:03.911794 kubelet[2515]: E0117 00:17:03.910882 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:03.927024 kubelet[2515]: E0117 00:17:03.926914 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tngkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2b6w7_calico-system(abee5d80-98e2-4d1b-a6be-4919665c817d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:03.928302 kubelet[2515]: E0117 00:17:03.928186 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:17:04.485246 containerd[1465]: time="2026-01-17T00:17:04.484928652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:17:04.824268 containerd[1465]: time="2026-01-17T00:17:04.824106144Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:04.828080 containerd[1465]: time="2026-01-17T00:17:04.827808738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:17:04.828080 containerd[1465]: time="2026-01-17T00:17:04.827839281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:17:04.828630 kubelet[2515]: E0117 00:17:04.828540 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:04.828769 kubelet[2515]: E0117 00:17:04.828630 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:04.828937 kubelet[2515]: E0117 00:17:04.828820 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj7nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:04.832506 containerd[1465]: time="2026-01-17T00:17:04.832445630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:17:05.167137 containerd[1465]: time="2026-01-17T00:17:05.167065069Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:05.169693 containerd[1465]: time="2026-01-17T00:17:05.169605100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:17:05.171662 containerd[1465]: time="2026-01-17T00:17:05.169665524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:17:05.173695 kubelet[2515]: E0117 00:17:05.173617 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:05.175003 kubelet[2515]: E0117 00:17:05.173706 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:05.175003 kubelet[2515]: E0117 00:17:05.173876 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj7nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:05.175647 kubelet[2515]: E0117 00:17:05.175587 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:17:05.483153 containerd[1465]: time="2026-01-17T00:17:05.482735305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:05.799657 containerd[1465]: time="2026-01-17T00:17:05.799378551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:05.802153 containerd[1465]: time="2026-01-17T00:17:05.801950116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:05.802153 containerd[1465]: time="2026-01-17T00:17:05.801954984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:05.802591 kubelet[2515]: E0117 00:17:05.802423 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:05.802680 kubelet[2515]: E0117 00:17:05.802600 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:05.802923 kubelet[2515]: E0117 00:17:05.802832 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt5z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-598d5588f5-xr65t_calico-apiserver(fd23fc1c-2ea9-47e8-be5f-5279e384fd8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:05.804517 kubelet[2515]: E0117 00:17:05.804390 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:17:07.346919 systemd[1]: Started sshd@8-64.227.98.118:22-4.153.228.146:45734.service - OpenSSH per-connection server daemon (4.153.228.146:45734). 
Jan 17 00:17:07.479806 kubelet[2515]: E0117 00:17:07.479299 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:07.481121 containerd[1465]: time="2026-01-17T00:17:07.480710359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:07.786996 containerd[1465]: time="2026-01-17T00:17:07.786748396Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:07.789864 containerd[1465]: time="2026-01-17T00:17:07.789767723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:07.790357 containerd[1465]: time="2026-01-17T00:17:07.789827794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:07.790850 kubelet[2515]: E0117 00:17:07.790669 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:07.790850 kubelet[2515]: E0117 00:17:07.790750 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:07.792472 kubelet[2515]: E0117 00:17:07.791200 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zznc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-598d5588f5-f9bzn_calico-apiserver(d3a2c65a-63b7-42fa-9521-230bac7a856c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:07.795384 kubelet[2515]: E0117 00:17:07.793616 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:17:07.807399 sshd[5214]: Accepted publickey for core from 4.153.228.146 port 45734 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:07.810687 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:07.820482 systemd-logind[1450]: New session 9 of user core. Jan 17 00:17:07.830671 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:17:08.277815 sshd[5214]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:08.283974 systemd[1]: sshd@8-64.227.98.118:22-4.153.228.146:45734.service: Deactivated successfully. Jan 17 00:17:08.287752 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:17:08.289400 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:17:08.291209 systemd-logind[1450]: Removed session 9. Jan 17 00:17:13.354037 systemd[1]: Started sshd@9-64.227.98.118:22-4.153.228.146:45738.service - OpenSSH per-connection server daemon (4.153.228.146:45738). Jan 17 00:17:13.801292 sshd[5235]: Accepted publickey for core from 4.153.228.146 port 45738 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:13.804030 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:13.812908 systemd-logind[1450]: New session 10 of user core. Jan 17 00:17:13.815742 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:17:14.241692 sshd[5235]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:14.253172 systemd[1]: sshd@9-64.227.98.118:22-4.153.228.146:45738.service: Deactivated successfully. Jan 17 00:17:14.256217 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 17 00:17:14.257779 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:17:14.260505 systemd-logind[1450]: Removed session 10. Jan 17 00:17:14.321189 systemd[1]: Started sshd@10-64.227.98.118:22-4.153.228.146:45746.service - OpenSSH per-connection server daemon (4.153.228.146:45746). Jan 17 00:17:14.497690 kubelet[2515]: E0117 00:17:14.497502 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:17:14.767843 sshd[5248]: Accepted publickey for core from 4.153.228.146 port 45746 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:14.770341 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:14.779618 systemd-logind[1450]: New session 11 of user core. Jan 17 00:17:14.791765 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:17:15.322172 sshd[5248]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:15.328981 systemd[1]: sshd@10-64.227.98.118:22-4.153.228.146:45746.service: Deactivated successfully. Jan 17 00:17:15.333111 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:17:15.336730 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:17:15.339516 systemd-logind[1450]: Removed session 11. Jan 17 00:17:15.406990 systemd[1]: Started sshd@11-64.227.98.118:22-4.153.228.146:35682.service - OpenSSH per-connection server daemon (4.153.228.146:35682). Jan 17 00:17:15.874254 sshd[5258]: Accepted publickey for core from 4.153.228.146 port 35682 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:15.876762 sshd[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:15.890558 systemd-logind[1450]: New session 12 of user core. Jan 17 00:17:15.907675 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:17:16.008352 kubelet[2515]: E0117 00:17:16.008310 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:16.314477 sshd[5258]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:16.320826 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:17:16.321400 systemd[1]: sshd@11-64.227.98.118:22-4.153.228.146:35682.service: Deactivated successfully. Jan 17 00:17:16.324520 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:17:16.325644 systemd-logind[1450]: Removed session 12. 
Jan 17 00:17:17.480376 kubelet[2515]: E0117 00:17:17.479900 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:17.480376 kubelet[2515]: E0117 00:17:17.479946 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:18.481367 kubelet[2515]: E0117 00:17:18.480623 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:18.482871 kubelet[2515]: E0117 00:17:18.482028 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:17:18.488166 kubelet[2515]: E0117 00:17:18.487557 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60" Jan 17 00:17:19.484222 kubelet[2515]: E0117 00:17:19.483921 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 
00:17:20.485217 kubelet[2515]: E0117 00:17:20.485158 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:17:20.487391 kubelet[2515]: E0117 00:17:20.487318 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:17:21.407069 systemd[1]: Started sshd@12-64.227.98.118:22-4.153.228.146:35692.service - OpenSSH per-connection server daemon (4.153.228.146:35692). Jan 17 00:17:21.834520 sshd[5298]: Accepted publickey for core from 4.153.228.146 port 35692 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:21.836515 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:21.843991 systemd-logind[1450]: New session 13 of user core. Jan 17 00:17:21.850909 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:17:22.235652 sshd[5298]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:22.242037 systemd[1]: sshd@12-64.227.98.118:22-4.153.228.146:35692.service: Deactivated successfully. Jan 17 00:17:22.246144 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:17:22.247631 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:17:22.249687 systemd-logind[1450]: Removed session 13. 
Jan 17 00:17:26.485443 containerd[1465]: time="2026-01-17T00:17:26.484399978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:17:26.820594 containerd[1465]: time="2026-01-17T00:17:26.820135565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:26.823034 containerd[1465]: time="2026-01-17T00:17:26.822817877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:17:26.823034 containerd[1465]: time="2026-01-17T00:17:26.822952804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:26.823301 kubelet[2515]: E0117 00:17:26.823228 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:26.823301 kubelet[2515]: E0117 00:17:26.823290 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:17:26.823913 kubelet[2515]: E0117 00:17:26.823469 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tngkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2b6w7_calico-system(abee5d80-98e2-4d1b-a6be-4919665c817d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:26.824781 kubelet[2515]: E0117 00:17:26.824701 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:17:27.313010 systemd[1]: Started sshd@13-64.227.98.118:22-4.153.228.146:56664.service - OpenSSH per-connection server daemon (4.153.228.146:56664). Jan 17 00:17:27.809762 sshd[5311]: Accepted publickey for core from 4.153.228.146 port 56664 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:27.819167 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:27.831553 systemd-logind[1450]: New session 14 of user core. Jan 17 00:17:27.835715 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:17:28.336354 sshd[5311]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:28.345861 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:17:28.346707 systemd[1]: sshd@13-64.227.98.118:22-4.153.228.146:56664.service: Deactivated successfully. Jan 17 00:17:28.353926 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:17:28.358442 systemd-logind[1450]: Removed session 14. 
Jan 17 00:17:29.481160 containerd[1465]: time="2026-01-17T00:17:29.480482355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:17:29.818886 containerd[1465]: time="2026-01-17T00:17:29.818712398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:29.825641 containerd[1465]: time="2026-01-17T00:17:29.825555162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:17:29.825825 containerd[1465]: time="2026-01-17T00:17:29.825698317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:17:29.827332 kubelet[2515]: E0117 00:17:29.826021 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:29.827332 kubelet[2515]: E0117 00:17:29.826094 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:17:29.827332 kubelet[2515]: E0117 00:17:29.826233 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d9e40a58538b470a9311e65e534c32dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfkq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cdb99998-slhfc_calico-system(1d8c80dc-ca7e-4704-80bc-010f68ebac60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:29.830747 containerd[1465]: time="2026-01-17T00:17:29.830707055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:17:30.158547 containerd[1465]: time="2026-01-17T00:17:30.158242634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:30.162264 containerd[1465]: time="2026-01-17T00:17:30.161366053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:17:30.162264 containerd[1465]: time="2026-01-17T00:17:30.161451895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:30.162534 kubelet[2515]: E0117 00:17:30.161693 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:30.162534 kubelet[2515]: E0117 00:17:30.161753 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:17:30.163017 kubelet[2515]: E0117 00:17:30.161887 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfkq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cdb99998-slhfc_calico-system(1d8c80dc-ca7e-4704-80bc-010f68ebac60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:30.164575 kubelet[2515]: E0117 00:17:30.164479 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60" Jan 17 00:17:33.416797 systemd[1]: Started sshd@14-64.227.98.118:22-4.153.228.146:56666.service - OpenSSH per-connection server daemon (4.153.228.146:56666). 
Jan 17 00:17:33.483167 containerd[1465]: time="2026-01-17T00:17:33.483049298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:17:33.822076 containerd[1465]: time="2026-01-17T00:17:33.821747151Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:33.826514 containerd[1465]: time="2026-01-17T00:17:33.825601151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:17:33.826514 containerd[1465]: time="2026-01-17T00:17:33.825640663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:17:33.827204 kubelet[2515]: E0117 00:17:33.826675 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:33.827204 kubelet[2515]: E0117 00:17:33.826745 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:17:33.827204 kubelet[2515]: E0117 00:17:33.827121 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj7nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:33.829219 containerd[1465]: time="2026-01-17T00:17:33.828207519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:17:33.833451 sshd[5334]: Accepted publickey for core from 4.153.228.146 port 56666 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:33.836151 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:33.854420 systemd-logind[1450]: New session 15 of user core. Jan 17 00:17:33.865777 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 17 00:17:34.135611 containerd[1465]: time="2026-01-17T00:17:34.135434439Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:34.138703 containerd[1465]: time="2026-01-17T00:17:34.138621329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:17:34.138882 containerd[1465]: time="2026-01-17T00:17:34.138762658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:17:34.139239 kubelet[2515]: E0117 00:17:34.139134 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:34.139239 kubelet[2515]: E0117 00:17:34.139207 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:17:34.140935 kubelet[2515]: E0117 00:17:34.139545 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x69kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76574bc5-8kb79_calico-system(96db8296-fac0-44e6-a2a4-5921dbbfa75c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:34.141223 containerd[1465]: time="2026-01-17T00:17:34.140714411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:34.141292 kubelet[2515]: E0117 00:17:34.141043 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:17:34.299862 sshd[5334]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:34.306975 systemd[1]: sshd@14-64.227.98.118:22-4.153.228.146:56666.service: Deactivated successfully. Jan 17 00:17:34.312950 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:17:34.316336 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:17:34.319433 systemd-logind[1450]: Removed session 15. 
Jan 17 00:17:34.489191 containerd[1465]: time="2026-01-17T00:17:34.488890265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:34.493448 containerd[1465]: time="2026-01-17T00:17:34.492850432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:34.493448 containerd[1465]: time="2026-01-17T00:17:34.492876194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:34.500104 kubelet[2515]: E0117 00:17:34.500021 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:34.500441 kubelet[2515]: E0117 00:17:34.500384 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:34.501032 kubelet[2515]: E0117 00:17:34.500942 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt5z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-598d5588f5-xr65t_calico-apiserver(fd23fc1c-2ea9-47e8-be5f-5279e384fd8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:34.506514 containerd[1465]: time="2026-01-17T00:17:34.505633744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:17:34.510012 kubelet[2515]: E0117 00:17:34.509657 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c" Jan 17 00:17:34.815291 containerd[1465]: time="2026-01-17T00:17:34.814828863Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:34.817831 containerd[1465]: time="2026-01-17T00:17:34.817603413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:17:34.817831 containerd[1465]: time="2026-01-17T00:17:34.817758081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:17:34.819105 kubelet[2515]: E0117 00:17:34.818179 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:34.819105 kubelet[2515]: E0117 00:17:34.818244 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:17:34.819105 kubelet[2515]: E0117 00:17:34.818435 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj7nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lfkr9_calico-system(74b48e50-ea55-46c5-84cf-509f72a7af13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:34.819698 kubelet[2515]: E0117 00:17:34.819645 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13" Jan 17 00:17:35.480385 kubelet[2515]: E0117 00:17:35.480339 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 00:17:35.481373 containerd[1465]: time="2026-01-17T00:17:35.481329592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:17:35.822937 containerd[1465]: time="2026-01-17T00:17:35.822061900Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:17:35.826440 containerd[1465]: time="2026-01-17T00:17:35.825549459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:17:35.826440 containerd[1465]: time="2026-01-17T00:17:35.825685997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:17:35.826745 kubelet[2515]: E0117 00:17:35.826010 2515 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:35.826745 kubelet[2515]: E0117 00:17:35.826081 2515 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:17:35.826745 kubelet[2515]: E0117 00:17:35.826289 2515 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zznc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-598d5588f5-f9bzn_calico-apiserver(d3a2c65a-63b7-42fa-9521-230bac7a856c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:17:35.828621 kubelet[2515]: E0117 00:17:35.828479 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c" Jan 17 00:17:39.400971 systemd[1]: Started sshd@15-64.227.98.118:22-4.153.228.146:48608.service - OpenSSH per-connection server daemon (4.153.228.146:48608). Jan 17 00:17:39.887964 sshd[5347]: Accepted publickey for core from 4.153.228.146 port 48608 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:39.891937 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:39.900507 systemd-logind[1450]: New session 16 of user core. Jan 17 00:17:39.906758 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:17:40.413095 sshd[5347]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:40.417256 systemd[1]: sshd@15-64.227.98.118:22-4.153.228.146:48608.service: Deactivated successfully. Jan 17 00:17:40.423139 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:17:40.428400 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:17:40.430335 systemd-logind[1450]: Removed session 16. Jan 17 00:17:40.489017 systemd[1]: Started sshd@16-64.227.98.118:22-4.153.228.146:48622.service - OpenSSH per-connection server daemon (4.153.228.146:48622). Jan 17 00:17:40.907578 sshd[5360]: Accepted publickey for core from 4.153.228.146 port 48622 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:40.909547 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:40.922199 systemd-logind[1450]: New session 17 of user core. Jan 17 00:17:40.926762 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 00:17:41.484608 kubelet[2515]: E0117 00:17:41.481518 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d" Jan 17 00:17:41.793310 sshd[5360]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:41.805898 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:17:41.806428 systemd[1]: sshd@16-64.227.98.118:22-4.153.228.146:48622.service: Deactivated successfully. Jan 17 00:17:41.814571 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:17:41.818826 systemd-logind[1450]: Removed session 17. Jan 17 00:17:41.876918 systemd[1]: Started sshd@17-64.227.98.118:22-4.153.228.146:48632.service - OpenSSH per-connection server daemon (4.153.228.146:48632). Jan 17 00:17:42.336481 sshd[5371]: Accepted publickey for core from 4.153.228.146 port 48632 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:42.345915 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:42.358972 systemd-logind[1450]: New session 18 of user core. Jan 17 00:17:42.364780 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:17:42.491441 kubelet[2515]: E0117 00:17:42.488645 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60" Jan 17 00:17:43.540248 sshd[5371]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:43.547940 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:17:43.549400 systemd[1]: sshd@17-64.227.98.118:22-4.153.228.146:48632.service: Deactivated successfully. Jan 17 00:17:43.557040 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:17:43.558775 systemd-logind[1450]: Removed session 18. Jan 17 00:17:43.642134 systemd[1]: Started sshd@18-64.227.98.118:22-4.153.228.146:48640.service - OpenSSH per-connection server daemon (4.153.228.146:48640). 
Jan 17 00:17:44.122103 sshd[5392]: Accepted publickey for core from 4.153.228.146 port 48640 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:44.124897 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:44.132594 systemd-logind[1450]: New session 19 of user core. Jan 17 00:17:44.140928 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:17:45.123660 sshd[5392]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:45.136629 systemd[1]: sshd@18-64.227.98.118:22-4.153.228.146:48640.service: Deactivated successfully. Jan 17 00:17:45.147386 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:17:45.155389 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:17:45.160888 systemd-logind[1450]: Removed session 19. Jan 17 00:17:45.198195 systemd[1]: Started sshd@19-64.227.98.118:22-4.153.228.146:55756.service - OpenSSH per-connection server daemon (4.153.228.146:55756). Jan 17 00:17:45.481045 kubelet[2515]: E0117 00:17:45.480969 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c" Jan 17 00:17:45.639013 sshd[5404]: Accepted publickey for core from 4.153.228.146 port 55756 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:17:45.641754 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:17:45.656714 systemd-logind[1450]: New session 20 of user core. Jan 17 00:17:45.661713 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:17:45.909062 systemd[1]: run-containerd-runc-k8s.io-ac3dd0b6b60f39296cfb3ed837af022e675a3fdab4313d4b9b349c6b674e4333-runc.SBG4xY.mount: Deactivated successfully. Jan 17 00:17:46.161120 sshd[5404]: pam_unix(sshd:session): session closed for user core Jan 17 00:17:46.171789 systemd[1]: sshd@19-64.227.98.118:22-4.153.228.146:55756.service: Deactivated successfully. Jan 17 00:17:46.179271 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:17:46.184519 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:17:46.186237 systemd-logind[1450]: Removed session 20. 
Jan 17 00:17:47.484241 kubelet[2515]: E0117 00:17:47.484107 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13"
Jan 17 00:17:48.485179 kubelet[2515]: E0117 00:17:48.485115 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c"
Jan 17 00:17:50.485460 kubelet[2515]: E0117 00:17:50.483917 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c"
Jan 17 00:17:51.277752 systemd[1]: Started sshd@20-64.227.98.118:22-4.153.228.146:55772.service - OpenSSH per-connection server daemon (4.153.228.146:55772).
Jan 17 00:17:51.739949 sshd[5439]: Accepted publickey for core from 4.153.228.146 port 55772 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:51.741897 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:51.754148 systemd-logind[1450]: New session 21 of user core.
Jan 17 00:17:51.762804 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:17:52.207867 sshd[5439]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:52.214266 systemd[1]: sshd@20-64.227.98.118:22-4.153.228.146:55772.service: Deactivated successfully.
Jan 17 00:17:52.221105 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:17:52.225950 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:17:52.230598 systemd-logind[1450]: Removed session 21.
Jan 17 00:17:52.481932 kubelet[2515]: E0117 00:17:52.481771 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d"
Jan 17 00:17:55.479195 kubelet[2515]: E0117 00:17:55.479141 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:17:56.479456 kubelet[2515]: E0117 00:17:56.479295 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 00:17:56.489236 kubelet[2515]: E0117 00:17:56.487940 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cdb99998-slhfc" podUID="1d8c80dc-ca7e-4704-80bc-010f68ebac60"
Jan 17 00:17:57.289607 systemd[1]: Started sshd@21-64.227.98.118:22-4.153.228.146:60028.service - OpenSSH per-connection server daemon (4.153.228.146:60028).
Jan 17 00:17:57.694521 sshd[5454]: Accepted publickey for core from 4.153.228.146 port 60028 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:57.698287 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:57.712013 systemd-logind[1450]: New session 22 of user core.
Jan 17 00:17:57.718693 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:17:58.156237 sshd[5454]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:58.165144 systemd[1]: sshd@21-64.227.98.118:22-4.153.228.146:60028.service: Deactivated successfully.
Jan 17 00:17:58.172232 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:17:58.174332 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:17:58.175939 systemd-logind[1450]: Removed session 22.
Jan 17 00:18:00.487328 kubelet[2515]: E0117 00:18:00.487202 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76574bc5-8kb79" podUID="96db8296-fac0-44e6-a2a4-5921dbbfa75c"
Jan 17 00:18:00.489633 kubelet[2515]: E0117 00:18:00.488823 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lfkr9" podUID="74b48e50-ea55-46c5-84cf-509f72a7af13"
Jan 17 00:18:03.246655 systemd[1]: Started sshd@22-64.227.98.118:22-4.153.228.146:60044.service - OpenSSH per-connection server daemon (4.153.228.146:60044).
Jan 17 00:18:03.482278 kubelet[2515]: E0117 00:18:03.482230 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-xr65t" podUID="fd23fc1c-2ea9-47e8-be5f-5279e384fd8c"
Jan 17 00:18:03.703526 sshd[5469]: Accepted publickey for core from 4.153.228.146 port 60044 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:18:03.709176 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:18:03.719723 systemd-logind[1450]: New session 23 of user core.
Jan 17 00:18:03.725722 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:18:04.190337 sshd[5469]: pam_unix(sshd:session): session closed for user core
Jan 17 00:18:04.198720 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:18:04.198972 systemd[1]: sshd@22-64.227.98.118:22-4.153.228.146:60044.service: Deactivated successfully.
Jan 17 00:18:04.201925 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:18:04.203967 systemd-logind[1450]: Removed session 23.
Jan 17 00:18:05.481847 kubelet[2515]: E0117 00:18:05.481761 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-598d5588f5-f9bzn" podUID="d3a2c65a-63b7-42fa-9521-230bac7a856c"
Jan 17 00:18:06.484369 kubelet[2515]: E0117 00:18:06.483494 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2b6w7" podUID="abee5d80-98e2-4d1b-a6be-4919665c817d"