Jan 17 00:19:12.063750 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:19:12.063791 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:12.063812 kernel: BIOS-provided physical RAM map:
Jan 17 00:19:12.063822 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 00:19:12.063832 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 00:19:12.063841 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 00:19:12.063852 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 00:19:12.063862 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 00:19:12.063872 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:19:12.063885 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 00:19:12.063895 kernel: NX (Execute Disable) protection: active
Jan 17 00:19:12.063905 kernel: APIC: Static calls initialized
Jan 17 00:19:12.063949 kernel: SMBIOS 2.8 present.
Jan 17 00:19:12.063960 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 00:19:12.063973 kernel: Hypervisor detected: KVM
Jan 17 00:19:12.063990 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:19:12.064008 kernel: kvm-clock: using sched offset of 3506905567 cycles
Jan 17 00:19:12.064020 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:19:12.064031 kernel: tsc: Detected 1995.312 MHz processor
Jan 17 00:19:12.064045 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:19:12.064057 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:19:12.064070 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 00:19:12.064085 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 00:19:12.064098 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:19:12.064115 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:19:12.064129 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 00:19:12.064144 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064160 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064172 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064187 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 00:19:12.064201 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064216 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064230 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064246 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:19:12.064257 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
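The BIOS-e820 map above is the starting point for all kernel memory management: only the two ranges marked usable become RAM, and everything else stays reserved. A quick way to sanity-check the total is to sum the usable ranges; the sketch below is illustrative only (the two lines are copied from the log, and the end addresses are inclusive):

```python
import re

# Two "usable" ranges copied verbatim from the BIOS-e820 map above.
E820_LINES = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable",
]

RANGE_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(lines):
    """Sum the sizes of all ranges marked 'usable'; end addresses are inclusive."""
    total = 0
    for line in lines:
        m = RANGE_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

print(round(usable_bytes(E820_LINES) / 2**20, 1), "MiB")  # ~2047.3 MiB on this droplet
```

That comes to roughly 2047 MiB, consistent with a 2 GB droplet and close to the 2096612K total the kernel reports further down once its own reservations are carved out.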
Jan 17 00:19:12.064268 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Jan 17 00:19:12.064280 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 00:19:12.064291 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Jan 17 00:19:12.064302 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Jan 17 00:19:12.064315 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Jan 17 00:19:12.064333 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Jan 17 00:19:12.064348 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:19:12.064360 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:19:12.064373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:19:12.064386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 00:19:12.064404 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 00:19:12.064417 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 00:19:12.064433 kernel: Zone ranges:
Jan 17 00:19:12.064445 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:19:12.064458 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 00:19:12.064470 kernel: Normal empty
Jan 17 00:19:12.064483 kernel: Movable zone start for each node
Jan 17 00:19:12.064495 kernel: Early memory node ranges
Jan 17 00:19:12.064507 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 00:19:12.064519 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 00:19:12.064531 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 00:19:12.064546 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:19:12.064558 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 00:19:12.064575 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 00:19:12.064587 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:19:12.064600 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:19:12.064612 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:19:12.064624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:19:12.064636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:19:12.064649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:19:12.064664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:19:12.064676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:19:12.064689 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:19:12.064702 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:19:12.064716 kernel: TSC deadline timer available
Jan 17 00:19:12.064730 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:19:12.064745 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:19:12.064758 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 00:19:12.064778 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:19:12.064794 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:19:12.064814 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:19:12.064827 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:19:12.064840 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:19:12.064853 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:19:12.064866 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:19:12.064881 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:12.064894 kernel: random: crng init done
Jan 17 00:19:12.064908 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:19:12.065315 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:19:12.065329 kernel: Fallback order for Node 0: 0
Jan 17 00:19:12.065342 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 00:19:12.065354 kernel: Policy zone: DMA32
Jan 17 00:19:12.065366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:19:12.065379 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Jan 17 00:19:12.065392 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:19:12.065406 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:19:12.065420 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:19:12.065439 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:19:12.065454 kernel: Dynamic Preempt: voluntary
Jan 17 00:19:12.065469 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:19:12.065485 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:19:12.065500 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:19:12.065514 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:19:12.065527 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:19:12.065541 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:19:12.065553 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:19:12.065570 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:19:12.065584 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:19:12.065597 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:19:12.065610 kernel: Console: colour VGA+ 80x25
Jan 17 00:19:12.065630 kernel: printk: console [tty0] enabled
Jan 17 00:19:12.065643 kernel: printk: console [ttyS0] enabled
Jan 17 00:19:12.065656 kernel: ACPI: Core revision 20230628
Jan 17 00:19:12.065669 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:19:12.065682 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:19:12.065698 kernel: x2apic enabled
Jan 17 00:19:12.065711 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:19:12.065724 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:19:12.065737 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 17 00:19:12.065750 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Jan 17 00:19:12.065763 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 00:19:12.065775 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 00:19:12.065788 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:19:12.065815 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:19:12.065828 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:19:12.065841 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 00:19:12.065858 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:19:12.065871 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:19:12.065884 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:19:12.065897 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:19:12.065911 kernel: active return thunk: its_return_thunk
Jan 17 00:19:12.065943 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:19:12.065961 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:19:12.065975 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:19:12.065989 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:19:12.066003 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:19:12.066017 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:19:12.066031 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:19:12.066045 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:19:12.066058 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:19:12.066076 kernel: landlock: Up and running.
Jan 17 00:19:12.066091 kernel: SELinux: Initializing.
Jan 17 00:19:12.066106 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.066122 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.066136 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 00:19:12.066152 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:12.066169 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:12.066184 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:19:12.066200 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 00:19:12.066218 kernel: signal: max sigframe size: 1776
Jan 17 00:19:12.066232 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:19:12.066245 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:19:12.066259 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:19:12.066272 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:19:12.066286 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:19:12.066301 kernel: .... node #0, CPUs: #1
Jan 17 00:19:12.066314 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:19:12.066332 kernel: smpboot: Max logical packages: 1
Jan 17 00:19:12.066350 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
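"Calibrating delay loop (skipped) preset value" means the kernel derived loops_per_jiffy from the hypervisor-reported TSC frequency rather than measuring it. With the usual x86 relation BogoMIPS = lpj × HZ / 500000 (assuming CONFIG_HZ=1000 here), both figures in the log can be reproduced; a quick check:

```python
# Reproduce the BogoMIPS figures, assuming CONFIG_HZ=1000 on this kernel.
lpj = 1995312                      # loops_per_jiffy from "lpj=1995312" above
bogomips = lpj * 1000 / 500_000    # BogoMIPS = lpj * HZ / 500000
print(f"{bogomips:.2f}")           # 3990.62, the per-CPU value in the log
print(f"{2 * round(bogomips, 2):.2f}")  # 7981.24, the two-CPU total
```

The total is the sum of per-CPU values already truncated to two decimals, which is why it reads 7981.24 rather than 2 × 3990.624 = 7981.25.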
Jan 17 00:19:12.066363 kernel: devtmpfs: initialized
Jan 17 00:19:12.066376 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:19:12.066390 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:19:12.066404 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.066417 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:19:12.066430 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:19:12.066444 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:19:12.066458 kernel: audit: type=2000 audit(1768609150.937:1): state=initialized audit_enabled=0 res=1
Jan 17 00:19:12.066474 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:19:12.066487 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:19:12.066501 kernel: cpuidle: using governor menu
Jan 17 00:19:12.066514 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:19:12.066528 kernel: dca service started, version 1.12.1
Jan 17 00:19:12.066541 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:19:12.066554 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:19:12.066568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:19:12.066582 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:19:12.066599 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:19:12.066613 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:19:12.066628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:19:12.066643 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:19:12.066670 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:19:12.066684 kernel: ACPI: Interpreter enabled
Jan 17 00:19:12.066698 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:19:12.066712 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:19:12.066725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:19:12.066741 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:19:12.066756 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:19:12.066771 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:19:12.067241 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:19:12.067492 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:19:12.067644 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:19:12.067661 kernel: acpiphp: Slot [3] registered
Jan 17 00:19:12.067680 kernel: acpiphp: Slot [4] registered
Jan 17 00:19:12.067694 kernel: acpiphp: Slot [5] registered
Jan 17 00:19:12.067707 kernel: acpiphp: Slot [6] registered
Jan 17 00:19:12.067720 kernel: acpiphp: Slot [7] registered
Jan 17 00:19:12.067733 kernel: acpiphp: Slot [8] registered
Jan 17 00:19:12.067746 kernel: acpiphp: Slot [9] registered
Jan 17 00:19:12.067759 kernel: acpiphp: Slot [10] registered
Jan 17 00:19:12.067772 kernel: acpiphp: Slot [11] registered
Jan 17 00:19:12.067786 kernel: acpiphp: Slot [12] registered
Jan 17 00:19:12.067799 kernel: acpiphp: Slot [13] registered
Jan 17 00:19:12.067816 kernel: acpiphp: Slot [14] registered
Jan 17 00:19:12.067829 kernel: acpiphp: Slot [15] registered
Jan 17 00:19:12.067842 kernel: acpiphp: Slot [16] registered
Jan 17 00:19:12.067855 kernel: acpiphp: Slot [17] registered
Jan 17 00:19:12.067869 kernel: acpiphp: Slot [18] registered
Jan 17 00:19:12.067882 kernel: acpiphp: Slot [19] registered
Jan 17 00:19:12.067895 kernel: acpiphp: Slot [20] registered
Jan 17 00:19:12.067910 kernel: acpiphp: Slot [21] registered
Jan 17 00:19:12.067953 kernel: acpiphp: Slot [22] registered
Jan 17 00:19:12.067973 kernel: acpiphp: Slot [23] registered
Jan 17 00:19:12.067990 kernel: acpiphp: Slot [24] registered
Jan 17 00:19:12.068006 kernel: acpiphp: Slot [25] registered
Jan 17 00:19:12.068021 kernel: acpiphp: Slot [26] registered
Jan 17 00:19:12.068036 kernel: acpiphp: Slot [27] registered
Jan 17 00:19:12.068051 kernel: acpiphp: Slot [28] registered
Jan 17 00:19:12.068065 kernel: acpiphp: Slot [29] registered
Jan 17 00:19:12.068080 kernel: acpiphp: Slot [30] registered
Jan 17 00:19:12.068094 kernel: acpiphp: Slot [31] registered
Jan 17 00:19:12.068108 kernel: PCI host bridge to bus 0000:00
Jan 17 00:19:12.068285 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:19:12.068418 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:19:12.068554 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:19:12.068884 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:19:12.069051 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 00:19:12.069204 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:19:12.069394 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:19:12.069583 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:19:12.069755 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 00:19:12.069898 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 00:19:12.070071 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 00:19:12.070217 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 00:19:12.070358 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 00:19:12.070514 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 00:19:12.070688 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 00:19:12.070829 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 00:19:12.071030 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:19:12.071174 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 00:19:12.071311 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 00:19:12.071490 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:19:12.071634 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 00:19:12.071772 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 00:19:12.071911 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 00:19:12.072074 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 00:19:12.072211 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:19:12.072389 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:19:12.072540 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 00:19:12.072691 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 00:19:12.072834 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 00:19:12.073023 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:19:12.073166 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 00:19:12.073316 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 00:19:12.073455 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 00:19:12.073619 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 00:19:12.073755 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 00:19:12.073890 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 00:19:12.078171 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 00:19:12.078412 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:19:12.078598 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 00:19:12.078777 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 00:19:12.078997 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 00:19:12.079177 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:19:12.079321 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 00:19:12.079470 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 00:19:12.079617 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 00:19:12.079774 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 00:19:12.080001 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 00:19:12.080172 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 00:19:12.080191 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:19:12.080207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:19:12.080220 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:19:12.080234 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:19:12.080248 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:19:12.080262 kernel: iommu: Default domain type: Translated
Jan 17 00:19:12.080282 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:19:12.080294 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:19:12.080305 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:19:12.080317 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 00:19:12.080330 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 00:19:12.080485 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 00:19:12.080634 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 00:19:12.080781 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:19:12.080804 kernel: vgaarb: loaded
Jan 17 00:19:12.080817 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:19:12.080830 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:19:12.080842 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:19:12.080855 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:19:12.080868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
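Vendor ID 1af4 marks every virtio function enumerated above; the device IDs map to net (0x1000, at 00:03.0 and 00:04.0), block (0x1001, at 00:06.0 and 00:07.0), balloon (0x1002), SCSI (0x1004), and GPU (0x1050). On a booted system the same inventory can be read back from sysfs; a minimal sketch using the standard /sys/bus/pci/devices layout:

```python
import os

PCI_ROOT = "/sys/bus/pci/devices"

# Virtio device IDs seen in the log, mapped to their conventional names.
VIRTIO_IDS = {"0x1000": "net", "0x1001": "block", "0x1002": "balloon",
              "0x1004": "scsi", "0x1050": "gpu"}

def attr(dev, name):
    """Read one sysfs attribute (e.g. 'vendor', 'device') for a PCI function."""
    with open(os.path.join(PCI_ROOT, dev, name)) as f:
        return f.read().strip()

for dev in sorted(os.listdir(PCI_ROOT)):
    if attr(dev, "vendor") == "0x1af4":  # Red Hat, Inc. (virtio)
        print(dev, VIRTIO_IDS.get(attr(dev, "device"), "unknown"))
```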
Jan 17 00:19:12.080881 kernel: pnp: PnP ACPI init
Jan 17 00:19:12.080893 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 00:19:12.080907 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:19:12.083031 kernel: NET: Registered PF_INET protocol family
Jan 17 00:19:12.083061 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:19:12.083077 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:19:12.083093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:19:12.083108 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:19:12.083124 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:19:12.083140 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:19:12.083156 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.083171 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:19:12.083194 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:19:12.083222 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:19:12.083448 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:19:12.083582 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:19:12.083714 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:19:12.083851 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:19:12.084006 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 00:19:12.084162 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 00:19:12.084327 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:19:12.084350 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:19:12.084505 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 41208 usecs
Jan 17 00:19:12.084525 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:19:12.084542 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:19:12.084559 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 17 00:19:12.084575 kernel: Initialise system trusted keyrings
Jan 17 00:19:12.084592 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:19:12.084607 kernel: Key type asymmetric registered
Jan 17 00:19:12.084626 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:19:12.084642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:19:12.084658 kernel: io scheduler mq-deadline registered
Jan 17 00:19:12.084674 kernel: io scheduler kyber registered
Jan 17 00:19:12.084690 kernel: io scheduler bfq registered
Jan 17 00:19:12.084706 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:19:12.084722 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 00:19:12.084738 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:19:12.084754 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:19:12.084773 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:19:12.084789 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:19:12.084805 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:19:12.084821 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:19:12.084837 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:19:12.084851 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:19:12.087194 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 00:19:12.087374 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 00:19:12.087537 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:19:11 UTC (1768609151)
Jan 17 00:19:12.087673 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 00:19:12.087690 kernel: intel_pstate: CPU model not supported
Jan 17 00:19:12.087706 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:19:12.087720 kernel: Segment Routing with IPv6
Jan 17 00:19:12.087734 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:19:12.087748 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:19:12.087762 kernel: Key type dns_resolver registered
Jan 17 00:19:12.087777 kernel: IPI shorthand broadcast: enabled
Jan 17 00:19:12.087795 kernel: sched_clock: Marking stable (1241005377, 230494291)->(1531720649, -60220981)
Jan 17 00:19:12.087809 kernel: registered taskstats version 1
Jan 17 00:19:12.087824 kernel: Loading compiled-in X.509 certificates
Jan 17 00:19:12.087840 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:19:12.087856 kernel: Key type .fscrypt registered
Jan 17 00:19:12.087871 kernel: Key type fscrypt-provisioning registered
Jan 17 00:19:12.087887 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:19:12.087903 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:19:12.087948 kernel: ima: No architecture policies found
Jan 17 00:19:12.087969 kernel: clk: Disabling unused clocks
Jan 17 00:19:12.087985 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:19:12.088001 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:19:12.088016 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:19:12.088052 kernel: Run /init as init process
Jan 17 00:19:12.088070 kernel: with arguments:
Jan 17 00:19:12.088084 kernel: /init
Jan 17 00:19:12.088097 kernel: with environment:
Jan 17 00:19:12.088110 kernel: HOME=/
Jan 17 00:19:12.088127 kernel: TERM=linux
Jan 17 00:19:12.088144 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:19:12.088163 systemd[1]: Detected virtualization kvm.
Jan 17 00:19:12.088179 systemd[1]: Detected architecture x86-64.
Jan 17 00:19:12.088194 systemd[1]: Running in initrd.
Jan 17 00:19:12.088208 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:19:12.088222 systemd[1]: Hostname set to .
Jan 17 00:19:12.088239 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:19:12.088253 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:19:12.088268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:19:12.088283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
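The rtc_cmos line pins the wall-clock time for everything that follows: Unix time 1768609151 is 2026-01-17T00:19:11 UTC, one second after the audit(1768609150.937:1) stamp logged earlier. The conversion is a one-liner:

```python
from datetime import datetime, timezone

# From "setting system clock to 2026-01-17T00:19:11 UTC (1768609151)" above.
print(datetime.fromtimestamp(1768609151, tz=timezone.utc).isoformat())
# -> 2026-01-17T00:19:11+00:00
```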
Jan 17 00:19:12.088300 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:19:12.088315 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:19:12.088329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:19:12.088343 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:19:12.088362 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:19:12.088378 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:19:12.088392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:19:12.088406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:19:12.088421 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:19:12.088435 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:19:12.088449 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:19:12.088469 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:19:12.088486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:19:12.088502 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:19:12.088518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:19:12.088536 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:19:12.088557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:19:12.088575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:19:12.088592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:19:12.088610 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:19:12.088626 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:19:12.088648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:19:12.088683 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:19:12.088703 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:19:12.088719 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:19:12.088737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:19:12.088751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:12.088804 systemd-journald[185]: Collecting audit messages is disabled.
Jan 17 00:19:12.088847 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:19:12.088866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:19:12.088881 systemd-journald[185]: Journal started
Jan 17 00:19:12.088914 systemd-journald[185]: Runtime Journal (/run/log/journal/57bc7b41efec4a2bad74b8e640301a4f) is 4.9M, max 39.3M, 34.4M free.
Jan 17 00:19:12.086498 systemd-modules-load[186]: Inserted module 'overlay'
Jan 17 00:19:12.199756 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:19:12.199802 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:19:12.199825 kernel: Bridge firewalling registered
Jan 17 00:19:12.134018 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 17 00:19:12.201510 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:19:12.203357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:19:12.204862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:12.212280 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:12.226293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:19:12.232656 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:19:12.243194 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:19:12.249114 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:19:12.254042 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:12.266041 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:19:12.275255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:19:12.288242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:19:12.290291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:19:12.303240 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:19:12.306569 dracut-cmdline[214]: dracut-dracut-053
Jan 17 00:19:12.306977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:19:12.312508 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:19:12.352564 systemd-resolved[225]: Positive Trust Anchors:
Jan 17 00:19:12.352581 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:19:12.352616 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:19:12.356946 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 17 00:19:12.358596 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:19:12.361108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:19:12.439963 kernel: SCSI subsystem initialized
Jan 17 00:19:12.455968 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:19:12.474005 kernel: iscsi: registered transport (tcp)
Jan 17 00:19:12.502694 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:19:12.502784 kernel: QLogic iSCSI HBA Driver
Jan 17 00:19:12.567371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:19:12.578276 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:19:12.618072 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:19:12.618211 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:19:12.620331 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:19:12.683010 kernel: raid6: avx2x4 gen() 16270 MB/s
Jan 17 00:19:12.700986 kernel: raid6: avx2x2 gen() 15682 MB/s
Jan 17 00:19:12.719228 kernel: raid6: avx2x1 gen() 10826 MB/s
Jan 17 00:19:12.719341 kernel: raid6: using algorithm avx2x4 gen() 16270 MB/s
Jan 17 00:19:12.739436 kernel: raid6: .... xor() 5846 MB/s, rmw enabled
Jan 17 00:19:12.739549 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:19:12.771009 kernel: xor: automatically using best checksumming function avx
Jan 17 00:19:12.973993 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:19:12.992404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:19:13.000377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:19:13.030984 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 17 00:19:13.037896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:19:13.046363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:19:13.076379 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 17 00:19:13.128817 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:19:13.136320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:19:13.222160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:19:13.231834 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:19:13.278101 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:19:13.282587 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:19:13.284169 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:19:13.286031 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:19:13.294671 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:19:13.326763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:19:13.362005 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 00:19:13.381971 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 00:19:13.382356 kernel: ACPI: bus type USB registered
Jan 17 00:19:13.395771 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:19:13.395858 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:19:13.395873 kernel: GPT:9289727 != 125829119
Jan 17 00:19:13.395884 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:19:13.395910 kernel: GPT:9289727 != 125829119
Jan 17 00:19:13.395946 kernel: GPT: Use GNU Parted to correct GPT errors.
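The GPT complaints are the normal first-boot state: the backup GPT header was written when the (much smaller) image was built, ending at LBA 9289727, while the provisioned virtual disk ends at LBA 125829119, so the backup header is no longer in the last sector; the disk-uuid service below rewrites it in place, which is why boot continues cleanly. The capacity line is also easy to cross-check (the kernel prints decimal GB and binary GiB side by side):

```python
# Capacity line above: 125829120 512-byte logical blocks.
sectors, sector_size = 125829120, 512
size = sectors * sector_size
print(round(size / 10**9, 1), "GB")   # 64.4 (decimal, as the kernel prints)
print(round(size / 2**30, 1), "GiB")  # 60.0 (binary)

# Backup GPT header position baked into the image: LBA 9289727.
image_end_lba = 9289727
print(round((image_end_lba + 1) * sector_size / 2**30, 2), "GiB image")  # ~4.43
```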
Jan 17 00:19:13.395961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:13.399955 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:19:13.405973 kernel: usbcore: registered new interface driver hub
Jan 17 00:19:13.411961 kernel: usbcore: registered new device driver usb
Jan 17 00:19:13.423968 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 00:19:13.426974 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:19:13.435042 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 17 00:19:13.455104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:19:13.456325 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:13.460229 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:13.465002 kernel: libata version 3.00 loaded.
Jan 17 00:19:13.463617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:19:13.463805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:13.464756 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:13.471953 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:19:13.473128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:19:13.480976 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:19:13.484112 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 00:19:13.494716 kernel: scsi host1: ata_piix
Jan 17 00:19:13.505456 kernel: scsi host2: ata_piix
Jan 17 00:19:13.516282 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 00:19:13.516400 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 00:19:13.542970 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (454)
Jan 17 00:19:13.566983 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (451)
Jan 17 00:19:13.585855 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:19:13.687072 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 00:19:13.687489 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 00:19:13.687689 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 00:19:13.687877 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 00:19:13.688166 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:19:13.688404 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 00:19:13.689165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:19:13.699598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:19:13.705523 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:19:13.708435 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:19:13.721071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:19:13.734324 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:19:13.740249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:19:13.745054 disk-uuid[541]: Primary Header is updated.
Jan 17 00:19:13.745054 disk-uuid[541]: Secondary Entries is updated.
Jan 17 00:19:13.745054 disk-uuid[541]: Secondary Header is updated.
Jan 17 00:19:13.754235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:13.759030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:13.770388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:13.793334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:19:14.768974 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:19:14.772161 disk-uuid[542]: The operation has completed successfully.
Jan 17 00:19:14.822183 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:19:14.822357 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:19:14.858405 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:19:14.864391 sh[565]: Success
Jan 17 00:19:14.884285 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:19:14.967936 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:19:14.979330 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:19:14.981248 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:19:15.006911 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:19:15.007119 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:15.007144 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:19:15.012641 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:19:15.012785 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:19:15.025894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:19:15.028403 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:19:15.039489 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:19:15.044265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:19:15.063287 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:15.063413 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:15.063428 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:19:15.071026 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:19:15.091572 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:19:15.095367 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:15.105233 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:19:15.112231 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:19:15.274395 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:19:15.288161 systemd[1]: Starting systemd-networkd.service - Network Configuration...
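verity-setup.service ties together the verity.usr= and verity.usrhash= arguments from the kernel command line: it creates /dev/mapper/usr as a dm-verity device over the USR-A partition, so every block of /usr read from disk is checked against a sha256 hash tree whose root must equal the usrhash value. A hedged sketch of an equivalent offline check with the standard veritysetup tool follows; the device paths are illustrative, and on Flatcar the hash tree is embedded in the same partition, so a real invocation would also need the appropriate --hash-offset:

```python
import subprocess

# Root hash from the verity.usrhash= kernel argument above.
ROOT_HASH = "5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd"

# Illustrative device paths only; the data and hash areas both live inside
# the USR-A partition on Flatcar, which a real check would address via
# --hash-offset rather than two separate devices.
subprocess.run(
    ["veritysetup", "verify",
     "/dev/disk/by-partlabel/USR-A",   # data device (assumed)
     "/dev/disk/by-partlabel/USR-A",   # hash device (assumed, same partition)
     ROOT_HASH],
    check=True,  # non-zero exit means the tree does not match the root hash
)
```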
Jan 17 00:19:15.296473 ignition[654]: Ignition 2.19.0
Jan 17 00:19:15.296493 ignition[654]: Stage: fetch-offline
Jan 17 00:19:15.296549 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.300146 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:19:15.296564 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.296718 ignition[654]: parsed url from cmdline: ""
Jan 17 00:19:15.296724 ignition[654]: no config URL provided
Jan 17 00:19:15.296733 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:19:15.296747 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:19:15.296755 ignition[654]: failed to fetch config: resource requires networking
Jan 17 00:19:15.297364 ignition[654]: Ignition finished successfully
Jan 17 00:19:15.328185 systemd-networkd[754]: lo: Link UP
Jan 17 00:19:15.328202 systemd-networkd[754]: lo: Gained carrier
Jan 17 00:19:15.330905 systemd-networkd[754]: Enumeration completed
Jan 17 00:19:15.331548 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:19:15.331553 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 00:19:15.332779 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:15.332782 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:19:15.333887 systemd-networkd[754]: eth0: Link UP
Jan 17 00:19:15.333893 systemd-networkd[754]: eth0: Gained carrier
Jan 17 00:19:15.333907 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:19:15.335726 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:19:15.337469 systemd[1]: Reached target network.target - Network.
Jan 17 00:19:15.341449 systemd-networkd[754]: eth1: Link UP
Jan 17 00:19:15.341455 systemd-networkd[754]: eth1: Gained carrier
Jan 17 00:19:15.341475 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:19:15.345782 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:19:15.355030 systemd-networkd[754]: eth1: DHCPv4 address 10.124.0.47/20 acquired from 169.254.169.253
Jan 17 00:19:15.357143 systemd-networkd[754]: eth0: DHCPv4 address 209.38.74.55/20, gateway 209.38.64.1 acquired from 169.254.169.253
Jan 17 00:19:15.373152 ignition[757]: Ignition 2.19.0
Jan 17 00:19:15.373170 ignition[757]: Stage: fetch
Jan 17 00:19:15.373478 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.373502 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.373673 ignition[757]: parsed url from cmdline: ""
Jan 17 00:19:15.373680 ignition[757]: no config URL provided
Jan 17 00:19:15.373690 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:19:15.373704 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:19:15.373734 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 00:19:15.391709 ignition[757]: GET result: OK
Jan 17 00:19:15.391850 ignition[757]: parsing config with SHA512: 13397e93001dfc3a8192472c4f571c73a15d807426da8c58c447d47265a35fe121dc01d50abaf6e1a74e100c1cfcb2cbd290a3db7bf69254806b61e314df47cf
Jan 17 00:19:15.396393 unknown[757]: fetched base config from "system"
Jan 17 00:19:15.396409 unknown[757]: fetched base config from "system"
Jan 17 00:19:15.396758 ignition[757]: fetch: fetch complete
Jan 17 00:19:15.396417 unknown[757]: fetched user config from "digitalocean"
Jan 17 00:19:15.396764 ignition[757]: fetch: fetch passed
Jan 17 00:19:15.399455 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:19:15.396815 ignition[757]: Ignition finished successfully
Jan 17 00:19:15.409249 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:19:15.442554 ignition[764]: Ignition 2.19.0
Jan 17 00:19:15.443419 ignition[764]: Stage: kargs
Jan 17 00:19:15.443733 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.443750 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.447210 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:19:15.445286 ignition[764]: kargs: kargs passed
Jan 17 00:19:15.445381 ignition[764]: Ignition finished successfully
Jan 17 00:19:15.459438 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:19:15.485576 ignition[770]: Ignition 2.19.0
Jan 17 00:19:15.485590 ignition[770]: Stage: disks
Jan 17 00:19:15.485879 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:19:15.485897 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:19:15.490320 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:19:15.487846 ignition[770]: disks: disks passed
Jan 17 00:19:15.487977 ignition[770]: Ignition finished successfully
Jan 17 00:19:15.499451 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:19:15.501025 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:19:15.502687 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:19:15.504570 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:19:15.506362 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:19:15.523584 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
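On DigitalOcean the Ignition fetch stage above boils down to a single HTTP GET against the link-local metadata service, after which the config is hashed and parsed. A minimal sketch of the same retrieval; the URL is the one in the log, and the SHA512 step mirrors the "parsing config with SHA512" line rather than any Ignition API:

```python
import hashlib
import urllib.request

# The link-local metadata endpoint Ignition fetches in the log above.
URL = "http://169.254.169.254/metadata/v1/user-data"

with urllib.request.urlopen(URL, timeout=5) as resp:
    config = resp.read()

# Ignition logs a SHA512 of the raw config before parsing it.
print(hashlib.sha512(config).hexdigest())
```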
Jan 17 00:19:15.545174 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:19:15.551952 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:19:15.560129 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:19:15.714801 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:19:15.714099 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:19:15.716815 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:19:15.728526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:19:15.732186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:19:15.734024 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 00:19:15.742983 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786)
Jan 17 00:19:15.749041 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:19:15.749563 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:19:15.758408 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:19:15.758437 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:19:15.751508 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:19:15.751554 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:19:15.764271 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:19:15.773311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:19:15.785221 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:19:15.796429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:19:15.867968 coreos-metadata[789]: Jan 17 00:19:15.865 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:19:15.873154 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:19:15.879342 coreos-metadata[789]: Jan 17 00:19:15.879 INFO Fetch successful
Jan 17 00:19:15.882567 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:19:15.888525 coreos-metadata[789]: Jan 17 00:19:15.887 INFO wrote hostname ci-4081.3.6-n-8cc98427e3 to /sysroot/etc/hostname
Jan 17 00:19:15.890754 coreos-metadata[788]: Jan 17 00:19:15.890 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:19:15.890337 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:19:15.898425 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:19:15.902695 coreos-metadata[788]: Jan 17 00:19:15.902 INFO Fetch successful
Jan 17 00:19:15.906368 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:19:15.911681 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 00:19:15.911870 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 00:19:16.037966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:19:16.044206 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:19:16.059462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:19:16.075162 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:19:16.072711 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:19:16.097973 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:19:16.113687 ignition[907]: INFO : Ignition 2.19.0 Jan 17 00:19:16.113687 ignition[907]: INFO : Stage: mount Jan 17 00:19:16.115829 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:16.115829 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:19:16.118793 ignition[907]: INFO : mount: mount passed Jan 17 00:19:16.118793 ignition[907]: INFO : Ignition finished successfully Jan 17 00:19:16.117665 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:19:16.135270 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:19:16.153266 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:19:16.167454 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918) Jan 17 00:19:16.167517 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:19:16.170514 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:19:16.173011 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:19:16.179963 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:19:16.182583 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:19:16.221823 ignition[935]: INFO : Ignition 2.19.0 Jan 17 00:19:16.221823 ignition[935]: INFO : Stage: files Jan 17 00:19:16.224390 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:16.224390 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:19:16.224390 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:19:16.228298 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:19:16.228298 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:19:16.231180 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:19:16.231180 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:19:16.234816 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:19:16.234816 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:19:16.234816 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:19:16.232237 unknown[935]: wrote ssh authorized keys file for user: core Jan 17 00:19:16.291378 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:19:16.360032 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:19:16.360032 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:19:16.363052 ignition[935]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:16.363052 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:19:16.505196 systemd-networkd[754]: eth1: Gained IPv6LL Jan 17 00:19:16.666400 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:19:16.826187 systemd-networkd[754]: eth0: Gained IPv6LL Jan 17 00:19:17.064477 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:19:17.064477 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:19:17.067547 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:19:17.067547 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:19:17.067547 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:19:17.067547 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:19:17.067547 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:19:17.067547 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:19:17.067547 ignition[935]: 
INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:19:17.067547 ignition[935]: INFO : files: files passed Jan 17 00:19:17.067547 ignition[935]: INFO : Ignition finished successfully Jan 17 00:19:17.068784 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:19:17.080148 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:19:17.082125 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:19:17.090243 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:19:17.090376 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:19:17.111452 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:17.113171 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:17.115027 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:19:17.116588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:19:17.119254 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:19:17.126218 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:19:17.167624 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:19:17.167800 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:19:17.169441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:19:17.170310 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:19:17.172263 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:19:17.177144 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:19:17.204126 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:19:17.210221 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:19:17.238265 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:19:17.239328 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:19:17.241293 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:19:17.242982 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:19:17.243119 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:19:17.244834 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:19:17.245816 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:19:17.247440 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:19:17.249006 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:19:17.250602 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:19:17.252583 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:19:17.254346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:19:17.256214 systemd[1]: Stopped target sysinit.target - System Initialization. 
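Each op(N) in the files stage above is a discrete filesystem action against /sysroot: plain file writes, the symlink pairing /etc/extensions/kubernetes.raw with the sysext image written by op(a), and a preset enabling prepare-helm.service. A sketch of the link and preset operations, assuming prepare-helm.service installs into multi-user.target (the usual case for a one-shot provisioning unit):

    package main

    import "os"

    func main() {
        root := "/sysroot"

        // op(9): link /etc/extensions/kubernetes.raw at the image written by op(a).
        if err := os.MkdirAll(root+"/etc/extensions", 0o755); err != nil {
            panic(err)
        }
        if err := os.Symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
            root+"/etc/extensions/kubernetes.raw"); err != nil {
            panic(err)
        }

        // op(d): enabling a preset amounts to a wants-directory symlink,
        // assuming the unit's [Install] section names multi-user.target.
        wants := root + "/etc/systemd/system/multi-user.target.wants"
        if err := os.MkdirAll(wants, 0o755); err != nil {
            panic(err)
        }
        if err := os.Symlink("/etc/systemd/system/prepare-helm.service",
            wants+"/prepare-helm.service"); err != nil {
            panic(err)
        }
    }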
Jan 17 00:19:17.258214 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:19:17.260092 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:19:17.261706 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:19:17.261844 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:19:17.263997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:19:17.265139 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:19:17.266841 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:19:17.269049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:19:17.271270 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:19:17.271462 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:19:17.273819 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:19:17.274036 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:19:17.275278 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:19:17.275421 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:19:17.276811 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:19:17.276973 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:19:17.285260 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:19:17.286111 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:19:17.286314 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:19:17.291231 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:19:17.294556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:19:17.294823 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:19:17.298793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:19:17.299008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:19:17.307731 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:19:17.307838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:19:17.319918 ignition[988]: INFO : Ignition 2.19.0 Jan 17 00:19:17.321953 ignition[988]: INFO : Stage: umount Jan 17 00:19:17.321953 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:19:17.321953 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:19:17.327958 ignition[988]: INFO : umount: umount passed Jan 17 00:19:17.327958 ignition[988]: INFO : Ignition finished successfully Jan 17 00:19:17.326665 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:19:17.326813 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:19:17.329603 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:19:17.329765 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:19:17.339238 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:19:17.339343 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:19:17.344171 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 17 00:19:17.344266 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:19:17.345887 systemd[1]: Stopped target network.target - Network. Jan 17 00:19:17.347445 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:19:17.347545 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:19:17.349240 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:19:17.350964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:19:17.354003 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:19:17.355625 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:19:17.357225 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:19:17.359210 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:19:17.359281 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:19:17.360996 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:19:17.361058 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:19:17.362813 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:19:17.362895 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:19:17.365072 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:19:17.365160 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:19:17.367226 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:19:17.369046 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:19:17.372358 systemd-networkd[754]: eth1: DHCPv6 lease lost Jan 17 00:19:17.372576 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:19:17.374293 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:19:17.374462 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:19:17.376207 systemd-networkd[754]: eth0: DHCPv6 lease lost Jan 17 00:19:17.376905 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:19:17.377410 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:19:17.380875 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:19:17.381297 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:19:17.382805 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:19:17.383006 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:19:17.389103 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:19:17.389214 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:19:17.396202 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:19:17.398388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:19:17.398498 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:19:17.400807 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:19:17.400901 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:19:17.402516 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:19:17.402597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 17 00:19:17.404778 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:19:17.404857 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:19:17.408749 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:19:17.428616 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:19:17.428968 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:19:17.431242 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:19:17.431351 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:19:17.433212 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:19:17.433269 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:19:17.435047 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:19:17.435124 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:19:17.437685 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:19:17.437739 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:19:17.439506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:19:17.439562 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:19:17.447173 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:19:17.447978 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:19:17.448066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:19:17.450045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:17.450124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:17.453025 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:19:17.453191 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:19:17.463506 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:19:17.463659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:19:17.464829 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:19:17.472309 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:19:17.485679 systemd[1]: Switching root. Jan 17 00:19:17.545620 systemd-journald[185]: Journal stopped Jan 17 00:19:18.893788 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Jan 17 00:19:18.893853 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:19:18.893869 kernel: SELinux: policy capability open_perms=1 Jan 17 00:19:18.893881 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:19:18.893893 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:19:18.893903 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:19:18.893915 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:19:18.895065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:19:18.895081 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:19:18.895092 kernel: audit: type=1403 audit(1768609157.863:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:19:18.895107 systemd[1]: Successfully loaded SELinux policy in 60.702ms. 
Jan 17 00:19:18.895133 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.778ms. Jan 17 00:19:18.895147 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:19:18.895160 systemd[1]: Detected virtualization kvm. Jan 17 00:19:18.895172 systemd[1]: Detected architecture x86-64. Jan 17 00:19:18.895187 systemd[1]: Detected first boot. Jan 17 00:19:18.895203 systemd[1]: Hostname set to <ci-4081.3.6-n-8cc98427e3>. Jan 17 00:19:18.895215 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:19:18.895227 zram_generator::config[1034]: No configuration found. Jan 17 00:19:18.895247 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:19:18.895267 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:19:18.895285 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:19:18.895304 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:19:18.895325 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:19:18.895344 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:19:18.895356 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:19:18.895368 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:19:18.895385 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:19:18.895398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:19:18.895413 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:19:18.895425 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:19:18.895438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:19:18.895453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:19:18.895464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:19:18.895480 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:19:18.895493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:19:18.895504 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:19:18.895516 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:19:18.895528 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:19:18.895539 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:19:18.895554 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:19:18.895566 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:19:18.895578 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:19:18.895590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:19:18.895601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
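The "Initializing machine ID from VM UUID" entry above is the first-boot path: with no /etc/machine-id on disk yet, systemd derives one from the hypervisor-provided UUID. A sketch of that derivation, assuming the conventional DMI sysfs path on x86 KVM guests and the usual dash-stripped formatting of machine IDs:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // DMI product UUID as exposed by the hypervisor.
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            panic(err)
        }
        uuid := strings.ToLower(strings.TrimSpace(string(raw)))
        // A machine ID is 32 hex digits, i.e. the UUID with its dashes dropped.
        fmt.Println(strings.ReplaceAll(uuid, "-", ""))
    }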
Jan 17 00:19:18.895613 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:19:18.895624 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:19:18.895637 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:19:18.895651 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:19:18.895662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:19:18.895676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:19:18.895689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:19:18.895710 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:19:18.895730 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:19:18.895748 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:19:18.895766 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:19:18.895784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.895800 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:19:18.895813 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:19:18.895824 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:19:18.895836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:19:18.895848 systemd[1]: Reached target machines.target - Containers. Jan 17 00:19:18.895863 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:19:18.895884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:18.895903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:19:18.895959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:19:18.895978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:19:18.895996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:19:18.896015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:19:18.896034 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:19:18.896049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:19:18.896065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:19:18.896078 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:19:18.896094 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:19:18.896106 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:19:18.896119 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:19:18.896131 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:19:18.896142 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:19:18.896154 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 17 00:19:18.896166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:19:18.896177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:19:18.896189 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:19:18.896204 systemd[1]: Stopped verity-setup.service. Jan 17 00:19:18.896216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:18.896227 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:19:18.896239 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:19:18.896250 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:19:18.896261 kernel: loop: module loaded Jan 17 00:19:18.898001 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:19:18.898039 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:19:18.898060 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:19:18.898090 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:19:18.898173 systemd-journald[1107]: Collecting audit messages is disabled. Jan 17 00:19:18.898211 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:19:18.898229 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:19:18.898253 systemd-journald[1107]: Journal started Jan 17 00:19:18.898297 systemd-journald[1107]: Runtime Journal (/run/log/journal/57bc7b41efec4a2bad74b8e640301a4f) is 4.9M, max 39.3M, 34.4M free. Jan 17 00:19:18.512695 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:19:18.532817 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:19:18.533346 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:19:18.902957 kernel: fuse: init (API version 7.39) Jan 17 00:19:18.905959 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:19:18.908694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:18.908852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:19:18.911156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:18.911325 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:19:18.912439 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:19:18.912576 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:19:18.914412 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:18.914607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:19:18.915771 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:19:18.916901 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:19:18.918169 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:19:18.935084 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:19:18.948063 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:19:18.953035 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 17 00:19:18.955099 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:19:18.955151 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:19:18.959352 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:19:18.967190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:19:18.971115 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:19:18.972145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:18.980183 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:19:18.995331 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:19:18.996418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:19.001208 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:19:19.002275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:19.003873 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:19:19.007383 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:19:19.016030 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:19:19.019644 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:19:19.020852 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:19:19.022226 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:19:19.044451 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:19:19.068277 kernel: ACPI: bus type drm_connector registered Jan 17 00:19:19.066371 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:19:19.087051 systemd-journald[1107]: Time spent on flushing to /var/log/journal/57bc7b41efec4a2bad74b8e640301a4f is 106.347ms for 984 entries. Jan 17 00:19:19.087051 systemd-journald[1107]: System Journal (/var/log/journal/57bc7b41efec4a2bad74b8e640301a4f) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:19:19.208540 systemd-journald[1107]: Received client request to flush runtime journal. Jan 17 00:19:19.208613 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:19:19.208641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:19:19.091249 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:19:19.091437 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:19:19.092580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:19:19.125056 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:19:19.128080 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:19:19.130173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:19:19.137167 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 17 00:19:19.201100 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:19:19.211668 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:19:19.219032 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:19:19.227338 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:19:19.231463 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 00:19:19.244329 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:19:19.246344 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:19:19.279269 kernel: loop2: detected capacity change from 0 to 229808 Jan 17 00:19:19.299650 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Jan 17 00:19:19.300341 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Jan 17 00:19:19.315587 kernel: loop3: detected capacity change from 0 to 8 Jan 17 00:19:19.338183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:19:19.343108 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 00:19:19.371035 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:19:19.388981 kernel: loop6: detected capacity change from 0 to 229808 Jan 17 00:19:19.400881 kernel: loop7: detected capacity change from 0 to 8 Jan 17 00:19:19.402132 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 00:19:19.403296 (sd-merge)[1176]: Merged extensions into '/usr'. Jan 17 00:19:19.413362 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:19:19.413394 systemd[1]: Reloading... Jan 17 00:19:19.557955 zram_generator::config[1199]: No configuration found. Jan 17 00:19:19.860200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:19.915990 systemd[1]: Reloading finished in 501 ms. Jan 17 00:19:19.920956 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:19:19.950908 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:19:19.953210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:19:19.968397 systemd[1]: Starting ensure-sysext.service... Jan 17 00:19:19.981230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:19:20.007223 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:19:20.007251 systemd[1]: Reloading... Jan 17 00:19:20.061261 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:19:20.062760 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:19:20.065258 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:19:20.065728 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. 
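The (sd-merge) lines above are systemd-sysext assembling /usr from the extension images that the Ignition files stage linked into /etc/extensions. Before the merge, each candidate is just a .raw image in one of the sysext search directories; a quick way to enumerate them, assuming only the /etc/extensions hierarchy used in this log:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // /etc/extensions is where the earlier files stage placed its link;
        // sysext also searches /run/extensions and /var/lib/extensions.
        matches, err := filepath.Glob("/etc/extensions/*.raw")
        if err != nil {
            panic(err)
        }
        for _, m := range matches {
            fmt.Println("candidate extension image:", m)
        }
    }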
Jan 17 00:19:20.065828 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 17 00:19:20.074391 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:19:20.076163 systemd-tmpfiles[1247]: Skipping /boot Jan 17 00:19:20.100124 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:19:20.102119 systemd-tmpfiles[1247]: Skipping /boot Jan 17 00:19:20.141961 zram_generator::config[1277]: No configuration found. Jan 17 00:19:20.302398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:20.357031 systemd[1]: Reloading finished in 349 ms. Jan 17 00:19:20.379570 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:19:20.385827 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:19:20.404355 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:20.409248 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:19:20.419246 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:19:20.426351 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:19:20.432434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:19:20.441302 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:19:20.447122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.447325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.450277 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:19:20.463446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:19:20.473309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:19:20.474282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.474431 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.476740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.476916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.477961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.481112 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:19:20.482724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.487762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 00:19:20.488326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.498846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:19:20.500125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.500319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.504360 systemd[1]: Finished ensure-sysext.service. Jan 17 00:19:20.506390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:20.508184 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:19:20.514400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:20.514720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:19:20.518638 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:19:20.526764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:20.545359 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:19:20.558472 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:19:20.560078 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:20.560621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:19:20.566494 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:19:20.576209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:20.578252 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:19:20.581994 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:19:20.584340 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:19:20.584485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:19:20.587990 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:19:20.595420 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:19:20.607130 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Jan 17 00:19:20.629050 augenrules[1361]: No rules Jan 17 00:19:20.630190 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:20.655010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:19:20.665181 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:19:20.683788 systemd-resolved[1328]: Positive Trust Anchors: Jan 17 00:19:20.683814 systemd-resolved[1328]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:19:20.683869 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:19:20.690503 systemd-resolved[1328]: Using system hostname 'ci-4081.3.6-n-8cc98427e3'. Jan 17 00:19:20.693486 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:19:20.694997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:19:20.712389 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:19:20.713861 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:19:20.782048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1382) Jan 17 00:19:20.786180 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 00:19:20.787185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.787324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:19:20.794213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:19:20.797734 systemd-networkd[1371]: lo: Link UP Jan 17 00:19:20.798064 systemd-networkd[1371]: lo: Gained carrier Jan 17 00:19:20.801825 systemd-networkd[1371]: Enumeration completed Jan 17 00:19:20.803242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:19:20.804633 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-e2:3b:7b:1d:7d:33.network. Jan 17 00:19:20.807167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:19:20.809157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:19:20.809207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:19:20.809210 systemd-networkd[1371]: eth1: Link UP Jan 17 00:19:20.809217 systemd-networkd[1371]: eth1: Gained carrier Jan 17 00:19:20.809224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:19:20.809398 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:19:20.814114 systemd[1]: Reached target network.target - Network. Jan 17 00:19:20.816077 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:20.822571 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:19:20.824612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:19:20.824833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
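systemd-networkd matches each interface to a /run/systemd/network/10-<mac>.network unit generated from the kernel command line, then brings the link up; the addresses themselves came from the DHCPv4 leases recorded earlier (209.38.74.55/20 on eth0, 10.124.0.47/20 on eth1). As a sanity check on those leases, the /20 prefix puts the eth0 gateway inside the same subnet; a small Go verification using the standard net/netip package:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Lease from the earlier DHCPv4 lines: 209.38.74.55/20, gateway 209.38.64.1.
        lease := netip.MustParsePrefix("209.38.74.55/20")
        gw := netip.MustParseAddr("209.38.64.1")

        fmt.Println("network:", lease.Masked())                       // 209.38.64.0/20
        fmt.Println("gateway in subnet:", lease.Masked().Contains(gw)) // true
    }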
Jan 17 00:19:20.834963 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:19:20.844968 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 00:19:20.846374 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 00:19:20.851120 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:19:20.851330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:19:20.853911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:19:20.862292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:19:20.862636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:19:20.864815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:19:20.922542 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-e6:d6:ee:f5:a3:42.network. Jan 17 00:19:20.924331 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:20.924805 systemd-networkd[1371]: eth0: Link UP Jan 17 00:19:20.924893 systemd-networkd[1371]: eth0: Gained carrier Jan 17 00:19:20.929711 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:20.933461 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:20.936246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:19:20.944262 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:19:20.963002 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:19:20.977571 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:19:20.989971 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:19:20.994019 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 00:19:21.021519 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:19:21.111957 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:19:21.134330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:21.205366 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 00:19:21.210971 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 00:19:21.232951 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:19:21.233056 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:19:21.233072 kernel: [drm] features: -context_init Jan 17 00:19:21.238983 kernel: [drm] number of scanouts: 1 Jan 17 00:19:21.239946 kernel: [drm] number of cap sets: 0 Jan 17 00:19:21.243943 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 00:19:21.250052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.252097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:21.252414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.255166 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 00:19:21.263413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:21.268910 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:19:21.269026 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:19:21.278235 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:19:21.291954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:19:21.292877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.299696 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:19:21.303164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:19:21.335715 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:19:21.346021 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:19:21.366049 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:19:21.367988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:19:21.398431 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:19:21.400220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:19:21.400373 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:19:21.400565 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:19:21.400661 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:19:21.401168 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:19:21.403168 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:19:21.403342 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:19:21.403421 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:19:21.403463 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:19:21.403534 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:19:21.407062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:19:21.409348 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:19:21.417649 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:19:21.419987 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:19:21.422131 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:19:21.423770 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:19:21.425507 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:19:21.426903 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:21.427164 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:19:21.432175 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:19:21.436154 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:19:21.440156 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 17 00:19:21.444064 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:19:21.444204 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:19:21.452151 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:19:21.452690 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:19:21.460134 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:19:21.464007 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:19:21.467983 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:19:21.478281 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:19:21.487168 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:19:21.488947 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:19:21.492129 jq[1438]: false Jan 17 00:19:21.490495 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:19:21.497281 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:19:21.501072 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:19:21.506472 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:19:21.506768 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:19:21.531978 update_engine[1447]: I20260117 00:19:21.530748 1447 main.cc:92] Flatcar Update Engine starting Jan 17 00:19:21.550582 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:19:21.571481 coreos-metadata[1436]: Jan 17 00:19:21.571 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:19:21.572384 jq[1448]: true Jan 17 00:19:21.580492 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:19:21.580778 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:19:21.595095 coreos-metadata[1436]: Jan 17 00:19:21.594 INFO Fetch successful Jan 17 00:19:21.605443 dbus-daemon[1437]: [system] SELinux support is enabled Jan 17 00:19:21.605707 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:19:21.622204 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:19:21.622252 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:19:21.622817 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 17 00:19:21.627895 extend-filesystems[1439]: Found loop4 Jan 17 00:19:21.627895 extend-filesystems[1439]: Found loop5 Jan 17 00:19:21.627895 extend-filesystems[1439]: Found loop6 Jan 17 00:19:21.627895 extend-filesystems[1439]: Found loop7 Jan 17 00:19:21.627895 extend-filesystems[1439]: Found vda Jan 17 00:19:21.627895 extend-filesystems[1439]: Found vda1 Jan 17 00:19:21.622902 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 00:19:21.644022 tar[1453]: linux-amd64/LICENSE Jan 17 00:19:21.644022 tar[1453]: linux-amd64/helm Jan 17 00:19:21.655190 extend-filesystems[1439]: Found vda2 Jan 17 00:19:21.655190 extend-filesystems[1439]: Found vda3 Jan 17 00:19:21.655190 extend-filesystems[1439]: Found usr Jan 17 00:19:21.655190 extend-filesystems[1439]: Found vda4 Jan 17 00:19:21.655190 extend-filesystems[1439]: Found vda6 Jan 17 00:19:21.655190 extend-filesystems[1439]: Found vda7 Jan 17 00:19:21.655190 extend-filesystems[1439]: Found vda9 Jan 17 00:19:21.655190 extend-filesystems[1439]: Checking size of /dev/vda9 Jan 17 00:19:21.688649 update_engine[1447]: I20260117 00:19:21.642257 1447 update_check_scheduler.cc:74] Next update check in 7m54s Jan 17 00:19:21.624570 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:19:21.688782 jq[1462]: true Jan 17 00:19:21.637520 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:19:21.648335 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:19:21.650667 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:19:21.651116 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:19:21.651679 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:19:21.727863 extend-filesystems[1439]: Resized partition /dev/vda9 Jan 17 00:19:21.736152 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:19:21.740576 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 00:19:21.737063 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:19:21.737653 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:19:21.768600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1384) Jan 17 00:19:21.869751 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:19:21.873202 systemd-logind[1446]: New seat seat0. Jan 17 00:19:21.874832 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:19:21.874853 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:19:21.875297 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:19:21.900859 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:21.903791 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:19:21.918793 systemd[1]: Starting sshkeys.service... Jan 17 00:19:21.944955 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 00:19:21.958133 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
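The extend-filesystems/resize2fs entries here and just below grow the root filesystem online from 553472 to 15121403 blocks of 4 KiB each, i.e. from roughly 2.1 GiB to roughly 57.7 GiB. A quick arithmetic check of those logged numbers:

```go
// resize_math.go - back-of-the-envelope check of the resize2fs numbers
// logged above; block counts are 4 KiB ext4 blocks per the "(4k)" note.
package main

import "fmt"

func main() {
	const blockSize = 4096 // bytes
	const before, after = 553472, 15121403

	gib := func(blocks int64) float64 {
		return float64(blocks) * blockSize / (1 << 30)
	}
	fmt.Printf("before: %.2f GiB\n", gib(before)) // ~2.11 GiB
	fmt.Printf("after:  %.2f GiB\n", gib(after))  // ~57.68 GiB
}
```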
Jan 17 00:19:21.968060 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:19:21.981736 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:19:21.981736 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 00:19:21.981736 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 00:19:21.997046 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Jan 17 00:19:21.997046 extend-filesystems[1439]: Found vdb Jan 17 00:19:21.983022 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:19:21.984597 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:19:22.048452 coreos-metadata[1506]: Jan 17 00:19:22.048 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 00:19:22.064222 coreos-metadata[1506]: Jan 17 00:19:22.064 INFO Fetch successful Jan 17 00:19:22.095963 unknown[1506]: wrote ssh authorized keys file for user: core Jan 17 00:19:22.144050 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 17 00:19:22.144565 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:22.147346 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:19:22.148417 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:19:22.154445 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:19:22.156343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:22.166378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:19:22.169185 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:19:22.177819 systemd[1]: Finished sshkeys.service. Jan 17 00:19:22.273878 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:19:22.394800 containerd[1461]: time="2026-01-17T00:19:22.393112298Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:19:22.458117 systemd-networkd[1371]: eth1: Gained IPv6LL Jan 17 00:19:22.459526 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:22.469321 containerd[1461]: time="2026-01-17T00:19:22.469143050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.474041616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.475373698Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.475413582Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.475610720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.475633526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.475707388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.476817 containerd[1461]: time="2026-01-17T00:19:22.475723628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.480165 containerd[1461]: time="2026-01-17T00:19:22.479607489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.480165 containerd[1461]: time="2026-01-17T00:19:22.479650871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.480165 containerd[1461]: time="2026-01-17T00:19:22.479674203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.480165 containerd[1461]: time="2026-01-17T00:19:22.479685783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.480165 containerd[1461]: time="2026-01-17T00:19:22.479828290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.482961 containerd[1461]: time="2026-01-17T00:19:22.482493364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:19:22.482961 containerd[1461]: time="2026-01-17T00:19:22.482673066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:19:22.482961 containerd[1461]: time="2026-01-17T00:19:22.482689017Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:19:22.482961 containerd[1461]: time="2026-01-17T00:19:22.482827285Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:19:22.482961 containerd[1461]: time="2026-01-17T00:19:22.482886228Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:19:22.493893 containerd[1461]: time="2026-01-17T00:19:22.493845870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:19:22.494158 containerd[1461]: time="2026-01-17T00:19:22.494132166Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:19:22.494458 containerd[1461]: time="2026-01-17T00:19:22.494428588Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:19:22.495288 containerd[1461]: time="2026-01-17T00:19:22.494894836Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 17 00:19:22.495288 containerd[1461]: time="2026-01-17T00:19:22.494959393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:19:22.495288 containerd[1461]: time="2026-01-17T00:19:22.495213533Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:19:22.496149 containerd[1461]: time="2026-01-17T00:19:22.496127706Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496363358Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496392632Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496413785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496435638Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496450083Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496464238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496479295Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496494711Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496508293Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496521353Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496534055Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496554927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496571279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.497952 containerd[1461]: time="2026-01-17T00:19:22.496584888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496598961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496610871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496657672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496677707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496691553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496704621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496719104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496730253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496743266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496756910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496787115Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496813100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496826389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.498413 containerd[1461]: time="2026-01-17T00:19:22.496837958Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.498776898Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.498953586Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.498972382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.498987881Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.498998165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.499012404Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.499023623Z" level=info msg="NRI interface is disabled by configuration." 
Jan 17 00:19:22.499949 containerd[1461]: time="2026-01-17T00:19:22.499034079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:19:22.500162 containerd[1461]: time="2026-01-17T00:19:22.499351412Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:19:22.500162 containerd[1461]: time="2026-01-17T00:19:22.499407210Z" level=info msg="Connect containerd service" Jan 17 00:19:22.500162 containerd[1461]: time="2026-01-17T00:19:22.499478532Z" level=info msg="using legacy CRI server" Jan 17 00:19:22.500162 containerd[1461]: time="2026-01-17T00:19:22.499492764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:19:22.500162 containerd[1461]: time="2026-01-17T00:19:22.499591011Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:19:22.502807 containerd[1461]: time="2026-01-17T00:19:22.502263120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:19:22.503018 containerd[1461]: time="2026-01-17T00:19:22.502985221Z" level=info msg="Start subscribing containerd event" Jan 17 00:19:22.503099 containerd[1461]: time="2026-01-17T00:19:22.503089163Z" level=info msg="Start recovering state" Jan 17 00:19:22.503202 containerd[1461]: time="2026-01-17T00:19:22.503190954Z" level=info msg="Start event monitor" Jan 17 00:19:22.503362 containerd[1461]: time="2026-01-17T00:19:22.503342748Z" level=info msg="Start snapshots syncer" Jan 17 00:19:22.503441 containerd[1461]: time="2026-01-17T00:19:22.503429598Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:19:22.504271 containerd[1461]: time="2026-01-17T00:19:22.503894057Z" level=info msg="Start streaming server" Jan 17 00:19:22.507250 containerd[1461]: time="2026-01-17T00:19:22.505834801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:19:22.509023 containerd[1461]: time="2026-01-17T00:19:22.507406380Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:19:22.509023 containerd[1461]: time="2026-01-17T00:19:22.507549526Z" level=info msg="containerd successfully booted in 0.116927s" Jan 17 00:19:22.507670 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:19:22.535995 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:19:22.591393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:19:22.607340 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:19:22.647203 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:19:22.647509 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:19:22.660370 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:19:22.695581 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:19:22.702644 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:19:22.705412 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:19:22.710818 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:19:22.954558 tar[1453]: linux-amd64/README.md Jan 17 00:19:22.975086 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:19:23.656480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:23.660732 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:19:23.664015 systemd[1]: Startup finished in 1.404s (kernel) + 6.105s (initrd) + 5.859s (userspace) = 13.369s. Jan 17 00:19:23.673081 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:24.445208 kubelet[1560]: E0117 00:19:24.445094 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:24.448427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:24.448701 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:19:24.449284 systemd[1]: kubelet.service: Consumed 1.616s CPU time. Jan 17 00:19:26.073445 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:19:26.079448 systemd[1]: Started sshd@0-209.38.74.55:22-4.153.228.146:57116.service - OpenSSH per-connection server daemon (4.153.228.146:57116). Jan 17 00:19:26.475415 sshd[1572]: Accepted publickey for core from 4.153.228.146 port 57116 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:26.478155 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:26.491877 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:19:26.497327 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:19:26.501252 systemd-logind[1446]: New session 1 of user core. Jan 17 00:19:26.517494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:19:26.524333 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:19:26.538662 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:19:26.653723 systemd[1576]: Queued start job for default target default.target. Jan 17 00:19:26.666038 systemd[1576]: Created slice app.slice - User Application Slice. Jan 17 00:19:26.666200 systemd[1576]: Reached target paths.target - Paths. Jan 17 00:19:26.666314 systemd[1576]: Reached target timers.target - Timers. Jan 17 00:19:26.668166 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:19:26.689398 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:19:26.690368 systemd[1576]: Reached target sockets.target - Sockets. Jan 17 00:19:26.690405 systemd[1576]: Reached target basic.target - Basic System. Jan 17 00:19:26.690474 systemd[1576]: Reached target default.target - Main User Target. Jan 17 00:19:26.690509 systemd[1576]: Startup finished in 142ms. Jan 17 00:19:26.691096 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:19:26.698273 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:19:27.003256 systemd[1]: Started sshd@1-209.38.74.55:22-4.153.228.146:57128.service - OpenSSH per-connection server daemon (4.153.228.146:57128). Jan 17 00:19:27.433222 sshd[1587]: Accepted publickey for core from 4.153.228.146 port 57128 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:27.434896 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:27.440097 systemd-logind[1446]: New session 2 of user core. Jan 17 00:19:27.451239 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:19:27.737243 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:27.741909 systemd[1]: sshd@1-209.38.74.55:22-4.153.228.146:57128.service: Deactivated successfully. Jan 17 00:19:27.743839 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:19:27.744637 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:19:27.746116 systemd-logind[1446]: Removed session 2. Jan 17 00:19:27.831477 systemd[1]: Started sshd@2-209.38.74.55:22-4.153.228.146:57132.service - OpenSSH per-connection server daemon (4.153.228.146:57132). 
Jan 17 00:19:28.279707 sshd[1594]: Accepted publickey for core from 4.153.228.146 port 57132 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:28.281258 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:28.288011 systemd-logind[1446]: New session 3 of user core. Jan 17 00:19:28.290159 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:19:28.604471 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:28.608209 systemd[1]: sshd@2-209.38.74.55:22-4.153.228.146:57132.service: Deactivated successfully. Jan 17 00:19:28.610226 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:19:28.610875 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:19:28.611829 systemd-logind[1446]: Removed session 3. Jan 17 00:19:28.683070 systemd[1]: Started sshd@3-209.38.74.55:22-4.153.228.146:57140.service - OpenSSH per-connection server daemon (4.153.228.146:57140). Jan 17 00:19:29.128305 sshd[1601]: Accepted publickey for core from 4.153.228.146 port 57140 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:29.130095 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:29.134603 systemd-logind[1446]: New session 4 of user core. Jan 17 00:19:29.143164 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:19:29.450826 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:29.455178 systemd[1]: sshd@3-209.38.74.55:22-4.153.228.146:57140.service: Deactivated successfully. Jan 17 00:19:29.457194 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:19:29.457893 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:19:29.459201 systemd-logind[1446]: Removed session 4. Jan 17 00:19:29.530292 systemd[1]: Started sshd@4-209.38.74.55:22-4.153.228.146:57154.service - OpenSSH per-connection server daemon (4.153.228.146:57154). Jan 17 00:19:29.951149 sshd[1608]: Accepted publickey for core from 4.153.228.146 port 57154 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:29.952826 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:29.959460 systemd-logind[1446]: New session 5 of user core. Jan 17 00:19:29.964241 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:19:30.206711 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:19:30.207804 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:30.223667 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:30.291283 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:30.295774 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:19:30.296646 systemd[1]: sshd@4-209.38.74.55:22-4.153.228.146:57154.service: Deactivated successfully. Jan 17 00:19:30.299156 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:19:30.301413 systemd-logind[1446]: Removed session 5. Jan 17 00:19:30.387393 systemd[1]: Started sshd@5-209.38.74.55:22-4.153.228.146:57160.service - OpenSSH per-connection server daemon (4.153.228.146:57160). 
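Each login above follows the same pattern: a per-connection sshd@N.service accepts a publickey for user core, PAM opens the session, systemd-logind registers it, and a session-N.scope runs until logout. A client-side sketch of that exchange with golang.org/x/crypto/ssh follows; the key path is hypothetical, and the lax host-key check is for brevity only, where real code should verify against known hosts.

```go
// ssh_client.go - sketch of the client side of the logins above, using
// golang.org/x/crypto/ssh. The log shows only the server's view
// (publickey auth for user "core" on port 22).
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// For brevity only; production code should verify the host key.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "209.38.74.55:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("uname -r") // one command per session, as logged
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```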
Jan 17 00:19:30.821222 sshd[1616]: Accepted publickey for core from 4.153.228.146 port 57160 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:30.823198 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:30.829640 systemd-logind[1446]: New session 6 of user core. Jan 17 00:19:30.835267 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:19:31.075080 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:19:31.075579 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:31.080318 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:31.087201 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:19:31.087516 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:31.106413 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:31.109335 auditctl[1623]: No rules Jan 17 00:19:31.109821 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:19:31.110076 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:31.122523 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:19:31.155889 augenrules[1641]: No rules Jan 17 00:19:31.157638 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:19:31.159954 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 17 00:19:31.229327 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:31.232973 systemd[1]: sshd@5-209.38.74.55:22-4.153.228.146:57160.service: Deactivated successfully. Jan 17 00:19:31.234816 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:19:31.236566 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:19:31.237830 systemd-logind[1446]: Removed session 6. Jan 17 00:19:31.302356 systemd[1]: Started sshd@6-209.38.74.55:22-4.153.228.146:57164.service - OpenSSH per-connection server daemon (4.153.228.146:57164). Jan 17 00:19:31.687511 sshd[1649]: Accepted publickey for core from 4.153.228.146 port 57164 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:31.689585 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:31.696947 systemd-logind[1446]: New session 7 of user core. Jan 17 00:19:31.702259 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:19:31.915743 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:19:31.916192 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:19:32.486302 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:19:32.490054 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:19:33.027832 dockerd[1668]: time="2026-01-17T00:19:33.027474188Z" level=info msg="Starting up" Jan 17 00:19:33.182176 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport232586193-merged.mount: Deactivated successfully. 
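The sudo/auditctl exchange above is the audit rule set being regenerated: auditctl flushes the kernel rule list (hence the "No rules" from PID 1623), then augenrules reloads whatever remains under /etc/audit/rules.d after the two default rule files were removed. The resulting state can be inspected with the same tool, as in this sketch; it needs root (CAP_AUDIT_CONTROL).

```go
// audit_list.go - sketch: run `auditctl -l`, which prints "No rules" when
// the kernel audit rule list is empty, matching the entries above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	if err != nil {
		log.Fatalf("auditctl: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
```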
Jan 17 00:19:33.217964 dockerd[1668]: time="2026-01-17T00:19:33.217744220Z" level=info msg="Loading containers: start." Jan 17 00:19:33.363480 kernel: Initializing XFRM netlink socket Jan 17 00:19:33.398098 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 17 00:19:33.462239 systemd-networkd[1371]: docker0: Link UP Jan 17 00:19:33.483061 dockerd[1668]: time="2026-01-17T00:19:33.482993874Z" level=info msg="Loading containers: done." Jan 17 00:19:33.508690 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2629795018-merged.mount: Deactivated successfully. Jan 17 00:19:33.511520 dockerd[1668]: time="2026-01-17T00:19:33.511454124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:19:33.511699 dockerd[1668]: time="2026-01-17T00:19:33.511588076Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:19:33.511780 dockerd[1668]: time="2026-01-17T00:19:33.511743829Z" level=info msg="Daemon has completed initialization" Jan 17 00:19:33.566742 dockerd[1668]: time="2026-01-17T00:19:33.566559820Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:19:33.566860 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:19:34.430588 systemd-resolved[1328]: Clock change detected. Flushing caches. Jan 17 00:19:34.432043 systemd-timesyncd[1343]: Contacted time server 69.164.213.136:123 (2.flatcar.pool.ntp.org). Jan 17 00:19:34.432118 systemd-timesyncd[1343]: Initial clock synchronization to Sat 2026-01-17 00:19:34.430415 UTC. Jan 17 00:19:35.262002 containerd[1461]: time="2026-01-17T00:19:35.261712056Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:19:35.362265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:19:35.373949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:35.515943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:35.521853 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:35.596223 kubelet[1823]: E0117 00:19:35.596108 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:35.601052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:35.601214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:36.240136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745502277.mount: Deactivated successfully. 
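Docker's first start above is quick: overlay2 is selected as the storage driver (with a warning that native diff is degraded because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled), docker0 comes up, and the API lands on /run/docker.sock. A sketch that confirms the daemon answers on that socket, using the Docker Engine Go SDK (an assumed dependency, not something this host is shown running):

```go
// docker_ping.go - sketch: confirm the daemon that just logged
// "API listen on /run/docker.sock" answers on that socket, using
// github.com/docker/docker/client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("API version: %s, OS type: %s\n", ping.APIVersion, ping.OSType)
}
```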
Jan 17 00:19:37.718999 containerd[1461]: time="2026-01-17T00:19:37.718937106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:37.722567 containerd[1461]: time="2026-01-17T00:19:37.720744854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 17 00:19:37.722567 containerd[1461]: time="2026-01-17T00:19:37.720818833Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:37.724572 containerd[1461]: time="2026-01-17T00:19:37.724283751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:37.726318 containerd[1461]: time="2026-01-17T00:19:37.725929211Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.464157703s" Jan 17 00:19:37.726318 containerd[1461]: time="2026-01-17T00:19:37.725980958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:19:37.727443 containerd[1461]: time="2026-01-17T00:19:37.727424259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:19:39.630582 containerd[1461]: time="2026-01-17T00:19:39.629656356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:39.631598 containerd[1461]: time="2026-01-17T00:19:39.631327758Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 17 00:19:39.631598 containerd[1461]: time="2026-01-17T00:19:39.631449885Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:39.635198 containerd[1461]: time="2026-01-17T00:19:39.635123915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:39.637394 containerd[1461]: time="2026-01-17T00:19:39.636975492Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.90944179s" Jan 17 00:19:39.637394 containerd[1461]: time="2026-01-17T00:19:39.637024296Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:19:39.637963 containerd[1461]: time="2026-01-17T00:19:39.637882427Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 17 00:19:41.253575 containerd[1461]: time="2026-01-17T00:19:41.252910319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.254620 containerd[1461]: time="2026-01-17T00:19:41.254565830Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 17 00:19:41.255378 containerd[1461]: time="2026-01-17T00:19:41.254870395Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.258928 containerd[1461]: time="2026-01-17T00:19:41.258883594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:41.260595 containerd[1461]: time="2026-01-17T00:19:41.260552490Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.622613415s" Jan 17 00:19:41.260666 containerd[1461]: time="2026-01-17T00:19:41.260601583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:19:41.261188 containerd[1461]: time="2026-01-17T00:19:41.261146737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:19:41.262786 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 00:19:42.745018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119871008.mount: Deactivated successfully.
Jan 17 00:19:43.599635 containerd[1461]: time="2026-01-17T00:19:43.599569209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.601988 containerd[1461]: time="2026-01-17T00:19:43.601389642Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:19:43.604195 containerd[1461]: time="2026-01-17T00:19:43.604079168Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.606102 containerd[1461]: time="2026-01-17T00:19:43.605825393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:43.607906 containerd[1461]: time="2026-01-17T00:19:43.607854146Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.346669201s" Jan 17 00:19:43.608042 containerd[1461]: time="2026-01-17T00:19:43.607915338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:19:43.608977 containerd[1461]: time="2026-01-17T00:19:43.608705540Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:19:44.323774 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 17 00:19:44.374981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17053846.mount: Deactivated successfully. Jan 17 00:19:45.612875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:19:45.625035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:19:45.745060 containerd[1461]: time="2026-01-17T00:19:45.744975110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:45.749772 containerd[1461]: time="2026-01-17T00:19:45.749257552Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 17 00:19:45.759739 containerd[1461]: time="2026-01-17T00:19:45.757930754Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:45.768166 containerd[1461]: time="2026-01-17T00:19:45.768087057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:45.771480 containerd[1461]: time="2026-01-17T00:19:45.771395348Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.162642123s" Jan 17 00:19:45.771809 containerd[1461]: time="2026-01-17T00:19:45.771783003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:19:45.773086 containerd[1461]: time="2026-01-17T00:19:45.773038208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:19:45.982093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:45.996174 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:19:46.075190 kubelet[1964]: E0117 00:19:46.075068 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:19:46.079367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:19:46.080289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:19:46.511676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076907816.mount: Deactivated successfully. 
Jan 17 00:19:46.518581 containerd[1461]: time="2026-01-17T00:19:46.518446574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.519737 containerd[1461]: time="2026-01-17T00:19:46.519676971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:19:46.521574 containerd[1461]: time="2026-01-17T00:19:46.520064610Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.523038 containerd[1461]: time="2026-01-17T00:19:46.522975797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:46.524259 containerd[1461]: time="2026-01-17T00:19:46.524063065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 750.775056ms" Jan 17 00:19:46.524259 containerd[1461]: time="2026-01-17T00:19:46.524105010Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:19:46.525233 containerd[1461]: time="2026-01-17T00:19:46.524755059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:19:47.399014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996141415.mount: Deactivated successfully. Jan 17 00:19:47.468708 systemd-resolved[1328]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
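All of these PullImage/ImageCreate pairs are containerd's CRI plugin fetching control-plane images into its k8s.io namespace over /run/containerd/containerd.sock (both the socket and the root dirs appear in the CRI config dump earlier). The same pull can be driven with the containerd Go client, sketched here for the pause image:

```go
// ctr_pull.go - sketch: pull the pause image the way the log above shows
// containerd doing it, via the containerd Go client against
// /run/containerd/containerd.sock in the "k8s.io" namespace.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes images live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```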
Jan 17 00:19:50.189080 containerd[1461]: time="2026-01-17T00:19:50.188981187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:50.191679 containerd[1461]: time="2026-01-17T00:19:50.191581366Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 17 00:19:50.194566 containerd[1461]: time="2026-01-17T00:19:50.192740396Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:50.198529 containerd[1461]: time="2026-01-17T00:19:50.198468284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:19:50.207693 containerd[1461]: time="2026-01-17T00:19:50.207634154Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.682812481s" Jan 17 00:19:50.208210 containerd[1461]: time="2026-01-17T00:19:50.208173863Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:19:55.017360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:55.030039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:55.064947 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Jan 17 00:19:55.064967 systemd[1]: Reloading... Jan 17 00:19:55.187565 zram_generator::config[2097]: No configuration found. Jan 17 00:19:55.372302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:19:55.489117 systemd[1]: Reloading finished in 423 ms. Jan 17 00:19:55.563166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:55.567727 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:19:55.568075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:55.576092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:19:55.755735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:19:55.771457 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:19:55.842957 kubelet[2155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:19:55.842957 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 17 00:19:55.842957 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:19:55.842957 kubelet[2155]: I0117 00:19:55.842508 2155 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:19:56.444354 kubelet[2155]: I0117 00:19:56.444276 2155 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:19:56.444354 kubelet[2155]: I0117 00:19:56.444325 2155 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:19:56.444793 kubelet[2155]: I0117 00:19:56.444757 2155 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:19:56.485566 kubelet[2155]: I0117 00:19:56.483612 2155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:19:56.485566 kubelet[2155]: E0117 00:19:56.484780 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://209.38.74.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:19:56.499997 kubelet[2155]: E0117 00:19:56.499911 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:19:56.500253 kubelet[2155]: I0117 00:19:56.500229 2155 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:19:56.508827 kubelet[2155]: I0117 00:19:56.508776 2155 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:19:56.510468 kubelet[2155]: I0117 00:19:56.510396 2155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:19:56.514166 kubelet[2155]: I0117 00:19:56.510468 2155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8cc98427e3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:19:56.514166 kubelet[2155]: I0117 00:19:56.514150 2155 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:19:56.514166 kubelet[2155]: I0117 00:19:56.514168 2155 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:19:56.514442 kubelet[2155]: I0117 00:19:56.514369 2155 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:56.517906 kubelet[2155]: I0117 00:19:56.517251 2155 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:19:56.517906 kubelet[2155]: I0117 00:19:56.517296 2155 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:19:56.517906 kubelet[2155]: I0117 00:19:56.517328 2155 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:19:56.520688 kubelet[2155]: I0117 00:19:56.520648 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:19:56.528475 kubelet[2155]: E0117 00:19:56.528436 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://209.38.74.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8cc98427e3&limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:19:56.528928 kubelet[2155]: E0117 00:19:56.528903 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://209.38.74.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 17 00:19:56.530413 kubelet[2155]: I0117 00:19:56.530382 2155 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:19:56.531374 kubelet[2155]: I0117 00:19:56.531008 2155 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:19:56.532736 kubelet[2155]: W0117 00:19:56.531840 2155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:19:56.535484 kubelet[2155]: I0117 00:19:56.535456 2155 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:19:56.535597 kubelet[2155]: I0117 00:19:56.535518 2155 server.go:1289] "Started kubelet" Jan 17 00:19:56.538376 kubelet[2155]: I0117 00:19:56.538329 2155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:19:56.539786 kubelet[2155]: I0117 00:19:56.539764 2155 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:19:56.540213 kubelet[2155]: I0117 00:19:56.540160 2155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:19:56.540565 kubelet[2155]: I0117 00:19:56.540529 2155 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:19:56.542749 kubelet[2155]: I0117 00:19:56.542727 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:19:56.547478 kubelet[2155]: E0117 00:19:56.544147 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.74.55:6443/api/v1/namespaces/default/events\": dial tcp 209.38.74.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8cc98427e3.188b5cb199fc98bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8cc98427e3,UID:ci-4081.3.6-n-8cc98427e3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8cc98427e3,},FirstTimestamp:2026-01-17 00:19:56.535486651 +0000 UTC m=+0.756815372,LastTimestamp:2026-01-17 00:19:56.535486651 +0000 UTC m=+0.756815372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8cc98427e3,}" Jan 17 00:19:56.550780 kubelet[2155]: I0117 00:19:56.550743 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:19:56.554560 kubelet[2155]: I0117 00:19:56.554507 2155 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:19:56.563090 kubelet[2155]: I0117 00:19:56.554730 2155 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:19:56.563090 kubelet[2155]: E0117 00:19:56.555318 2155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" Jan 17 00:19:56.563291 kubelet[2155]: I0117 00:19:56.563167 2155 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:19:56.565838 kubelet[2155]: E0117 00:19:56.564641 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://209.38.74.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:19:56.565838 kubelet[2155]: E0117 00:19:56.564774 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.74.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8cc98427e3?timeout=10s\": dial tcp 209.38.74.55:6443: connect: connection refused" interval="200ms" Jan 17 00:19:56.565838 kubelet[2155]: I0117 00:19:56.565069 2155 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:19:56.565838 kubelet[2155]: I0117 00:19:56.565193 2155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:19:56.567715 kubelet[2155]: E0117 00:19:56.567693 2155 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:19:56.568161 kubelet[2155]: I0117 00:19:56.568143 2155 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:19:56.590757 kubelet[2155]: I0117 00:19:56.589654 2155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:19:56.598378 kubelet[2155]: I0117 00:19:56.598320 2155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:19:56.598378 kubelet[2155]: I0117 00:19:56.598360 2155 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:19:56.598576 kubelet[2155]: I0117 00:19:56.598389 2155 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:19:56.598576 kubelet[2155]: I0117 00:19:56.598403 2155 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:19:56.598576 kubelet[2155]: E0117 00:19:56.598466 2155 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:19:56.603444 kubelet[2155]: E0117 00:19:56.603312 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://209.38.74.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:56.607317 kubelet[2155]: I0117 00:19:56.606948 2155 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:19:56.607317 kubelet[2155]: I0117 00:19:56.607017 2155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:19:56.607317 kubelet[2155]: I0117 00:19:56.607051 2155 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:19:56.610134 kubelet[2155]: I0117 00:19:56.610109 2155 policy_none.go:49] "None policy: Start" Jan 17 00:19:56.610316 kubelet[2155]: I0117 00:19:56.610253 2155 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:19:56.610316 kubelet[2155]: I0117 00:19:56.610268 2155 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:19:56.617590 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:19:56.633949 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
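The failures above (the certificate signing request, the *v1.Node and *v1.Service reflectors, the lease controller) are all one symptom: the kubelet is dialing the control-plane endpoint 209.38.74.55:6443 before the kube-apiserver static pod is running, so every TCP connect is refused and retried. A self-contained Go sketch of that reachability check; the address is taken from the log, and the retry loop is purely illustrative, not kubelet code:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Control-plane endpoint as it appears in the dial errors above.
    const apiServer = "209.38.74.55:6443"
    for attempt := 1; attempt <= 5; attempt++ {
        conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
        if err != nil {
            // Until kube-apiserver is listening, this prints the same
            // "connect: connection refused" seen in each log entry.
            fmt.Printf("attempt %d: %v\n", attempt, err)
            time.Sleep(time.Second)
            continue
        }
        conn.Close()
        fmt.Println("apiserver is reachable")
        return
    }
}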
Jan 17 00:19:56.637515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:19:56.648333 kubelet[2155]: E0117 00:19:56.648297 2155 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:19:56.649492 kubelet[2155]: I0117 00:19:56.649328 2155 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:19:56.649492 kubelet[2155]: I0117 00:19:56.649350 2155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:19:56.651248 kubelet[2155]: I0117 00:19:56.650986 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:19:56.651931 kubelet[2155]: E0117 00:19:56.651815 2155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:19:56.651931 kubelet[2155]: E0117 00:19:56.651863 2155 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-8cc98427e3\" not found" Jan 17 00:19:56.719002 systemd[1]: Created slice kubepods-burstable-pod2dc6e7616bc482ea4b087f3e27c00151.slice - libcontainer container kubepods-burstable-pod2dc6e7616bc482ea4b087f3e27c00151.slice. Jan 17 00:19:56.726529 kubelet[2155]: E0117 00:19:56.726100 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.730043 systemd[1]: Created slice kubepods-burstable-podedc5817c08a8da2954a53ff3fe733618.slice - libcontainer container kubepods-burstable-podedc5817c08a8da2954a53ff3fe733618.slice. Jan 17 00:19:56.741759 kubelet[2155]: E0117 00:19:56.741709 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.745254 systemd[1]: Created slice kubepods-burstable-pod3a7babb4715cbac282b66acf6985350c.slice - libcontainer container kubepods-burstable-pod3a7babb4715cbac282b66acf6985350c.slice. 
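The kubepods-burstable-pod*.slice units being created here correspond to the control-plane manifests the kubelet picked up from the static pod path it registered earlier (path="/etc/kubernetes/manifests" at 00:19:56.517296); the "No need to create a mirror pod" errors are expected while the node object does not yet exist. A small Go sketch that lists that manifest directory, assuming the standard layout shown in the log:

package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"
)

func main() {
    // Static pod path registered by the kubelet above.
    const manifestDir = "/etc/kubernetes/manifests"
    entries, err := os.ReadDir(manifestDir)
    if err != nil {
        log.Fatal(err)
    }
    for _, e := range entries {
        // On this node these are the kube-apiserver, kube-controller-manager,
        // and kube-scheduler manifests seen throughout the log.
        fmt.Println(filepath.Join(manifestDir, e.Name()))
    }
}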
Jan 17 00:19:56.748527 kubelet[2155]: E0117 00:19:56.748224 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.750794 kubelet[2155]: I0117 00:19:56.750768 2155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.751426 kubelet[2155]: E0117 00:19:56.751395 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.74.55:6443/api/v1/nodes\": dial tcp 209.38.74.55:6443: connect: connection refused" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.765717 kubelet[2155]: E0117 00:19:56.765656 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.74.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8cc98427e3?timeout=10s\": dial tcp 209.38.74.55:6443: connect: connection refused" interval="400ms" Jan 17 00:19:56.864666 kubelet[2155]: I0117 00:19:56.864580 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.864666 kubelet[2155]: I0117 00:19:56.864661 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.864666 kubelet[2155]: I0117 00:19:56.864699 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.865415 kubelet[2155]: I0117 00:19:56.864725 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.865415 kubelet[2155]: I0117 00:19:56.864805 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a7babb4715cbac282b66acf6985350c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8cc98427e3\" (UID: \"3a7babb4715cbac282b66acf6985350c\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.865415 kubelet[2155]: I0117 00:19:56.864857 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dc6e7616bc482ea4b087f3e27c00151-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" (UID: \"2dc6e7616bc482ea4b087f3e27c00151\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.865415 kubelet[2155]: I0117 00:19:56.864886 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dc6e7616bc482ea4b087f3e27c00151-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" (UID: \"2dc6e7616bc482ea4b087f3e27c00151\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.865415 kubelet[2155]: I0117 00:19:56.864920 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dc6e7616bc482ea4b087f3e27c00151-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" (UID: \"2dc6e7616bc482ea4b087f3e27c00151\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.865630 kubelet[2155]: I0117 00:19:56.864957 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.953927 kubelet[2155]: I0117 00:19:56.953409 2155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:56.954271 kubelet[2155]: E0117 00:19:56.954224 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.74.55:6443/api/v1/nodes\": dial tcp 209.38.74.55:6443: connect: connection refused" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:57.028510 kubelet[2155]: E0117 00:19:57.027861 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:57.029346 containerd[1461]: time="2026-01-17T00:19:57.028790222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8cc98427e3,Uid:2dc6e7616bc482ea4b087f3e27c00151,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:57.043179 kubelet[2155]: E0117 00:19:57.043061 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:57.049045 containerd[1461]: time="2026-01-17T00:19:57.048990604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8cc98427e3,Uid:edc5817c08a8da2954a53ff3fe733618,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:57.049196 kubelet[2155]: E0117 00:19:57.049132 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:57.050079 containerd[1461]: time="2026-01-17T00:19:57.050037406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8cc98427e3,Uid:3a7babb4715cbac282b66acf6985350c,Namespace:kube-system,Attempt:0,}" Jan 17 00:19:57.166879 kubelet[2155]: E0117 00:19:57.166815 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.74.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8cc98427e3?timeout=10s\": dial 
tcp 209.38.74.55:6443: connect: connection refused" interval="800ms" Jan 17 00:19:57.356192 kubelet[2155]: I0117 00:19:57.355731 2155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:57.356192 kubelet[2155]: E0117 00:19:57.356072 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://209.38.74.55:6443/api/v1/nodes\": dial tcp 209.38.74.55:6443: connect: connection refused" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:57.582102 kubelet[2155]: E0117 00:19:57.582003 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://209.38.74.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8cc98427e3&limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:19:57.677217 kubelet[2155]: E0117 00:19:57.677165 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://209.38.74.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:19:57.685416 kubelet[2155]: E0117 00:19:57.685342 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://209.38.74.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:19:57.749011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198018400.mount: Deactivated successfully. 
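Note the lease controller's retry interval across these entries: 200ms at 00:19:56.564774, 400ms at 00:19:56.765656, 800ms at 00:19:57.166815, with 1.6s reported further down. Each failed attempt doubles the interval. A minimal Go sketch of that backoff shape (whether and where the interval is capped is not visible in this excerpt):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Starting interval and doubling match the "Failed to ensure lease
    // exists, will retry" entries in the log.
    interval := 200 * time.Millisecond
    for i := 0; i < 4; i++ {
        fmt.Printf("retry in %v\n", interval)
        interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s
    }
}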
Jan 17 00:19:57.754965 containerd[1461]: time="2026-01-17T00:19:57.754808656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:57.757212 containerd[1461]: time="2026-01-17T00:19:57.757127286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:57.758074 containerd[1461]: time="2026-01-17T00:19:57.758010866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:57.759184 containerd[1461]: time="2026-01-17T00:19:57.759140670Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:57.759866 containerd[1461]: time="2026-01-17T00:19:57.759747730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:19:57.760753 containerd[1461]: time="2026-01-17T00:19:57.760620485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:19:57.760753 containerd[1461]: time="2026-01-17T00:19:57.760714072Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:57.763982 containerd[1461]: time="2026-01-17T00:19:57.763931954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:19:57.765359 containerd[1461]: time="2026-01-17T00:19:57.765070202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.166919ms" Jan 17 00:19:57.768074 containerd[1461]: time="2026-01-17T00:19:57.768017219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.660694ms" Jan 17 00:19:57.768597 containerd[1461]: time="2026-01-17T00:19:57.768498669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.395605ms" Jan 17 00:19:57.954406 kubelet[2155]: E0117 00:19:57.953148 2155 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://209.38.74.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:19:57.960440 containerd[1461]: time="2026-01-17T00:19:57.959415161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:57.960440 containerd[1461]: time="2026-01-17T00:19:57.959481953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:57.960440 containerd[1461]: time="2026-01-17T00:19:57.959498117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:57.967565 kubelet[2155]: E0117 00:19:57.967450 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.74.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8cc98427e3?timeout=10s\": dial tcp 209.38.74.55:6443: connect: connection refused" interval="1.6s" Jan 17 00:19:57.967908 containerd[1461]: time="2026-01-17T00:19:57.960863118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:57.967908 containerd[1461]: time="2026-01-17T00:19:57.967755614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:57.967908 containerd[1461]: time="2026-01-17T00:19:57.967825274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:57.967908 containerd[1461]: time="2026-01-17T00:19:57.967855746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:57.969345 containerd[1461]: time="2026-01-17T00:19:57.968618541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:57.972988 containerd[1461]: time="2026-01-17T00:19:57.972833949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:19:57.973252 containerd[1461]: time="2026-01-17T00:19:57.973167726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:19:57.973755 containerd[1461]: time="2026-01-17T00:19:57.973410307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:57.974412 containerd[1461]: time="2026-01-17T00:19:57.974264759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:19:58.006811 systemd[1]: Started cri-containerd-9096416304eacf8578b43cb24362b9e67f56d4a7e825f2c1bf0b8f274d775134.scope - libcontainer container 9096416304eacf8578b43cb24362b9e67f56d4a7e825f2c1bf0b8f274d775134. Jan 17 00:19:58.009108 systemd[1]: Started cri-containerd-d8b3b6aec9743590743a9053ba8c7cecb56de688c8f52b8bbec78a69568545f2.scope - libcontainer container d8b3b6aec9743590743a9053ba8c7cecb56de688c8f52b8bbec78a69568545f2. 
Jan 17 00:19:58.015884 systemd[1]: Started cri-containerd-018fc6df3c14b0d635422e472f2fd6fbd8c894824d7da7ee5b22b64c4321d15b.scope - libcontainer container 018fc6df3c14b0d635422e472f2fd6fbd8c894824d7da7ee5b22b64c4321d15b. Jan 17 00:19:58.112339 containerd[1461]: time="2026-01-17T00:19:58.112147509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8cc98427e3,Uid:2dc6e7616bc482ea4b087f3e27c00151,Namespace:kube-system,Attempt:0,} returns sandbox id \"9096416304eacf8578b43cb24362b9e67f56d4a7e825f2c1bf0b8f274d775134\"" Jan 17 00:19:58.117408 kubelet[2155]: E0117 00:19:58.116629 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:58.129328 containerd[1461]: time="2026-01-17T00:19:58.129219982Z" level=info msg="CreateContainer within sandbox \"9096416304eacf8578b43cb24362b9e67f56d4a7e825f2c1bf0b8f274d775134\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:19:58.139896 containerd[1461]: time="2026-01-17T00:19:58.139813210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8cc98427e3,Uid:edc5817c08a8da2954a53ff3fe733618,Namespace:kube-system,Attempt:0,} returns sandbox id \"018fc6df3c14b0d635422e472f2fd6fbd8c894824d7da7ee5b22b64c4321d15b\"" Jan 17 00:19:58.141618 kubelet[2155]: E0117 00:19:58.141263 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:58.146307 containerd[1461]: time="2026-01-17T00:19:58.146044398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8cc98427e3,Uid:3a7babb4715cbac282b66acf6985350c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b3b6aec9743590743a9053ba8c7cecb56de688c8f52b8bbec78a69568545f2\"" Jan 17 00:19:58.147480 kubelet[2155]: E0117 00:19:58.147322 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:58.148408 containerd[1461]: time="2026-01-17T00:19:58.148366067Z" level=info msg="CreateContainer within sandbox \"018fc6df3c14b0d635422e472f2fd6fbd8c894824d7da7ee5b22b64c4321d15b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:19:58.154116 containerd[1461]: time="2026-01-17T00:19:58.154066662Z" level=info msg="CreateContainer within sandbox \"9096416304eacf8578b43cb24362b9e67f56d4a7e825f2c1bf0b8f274d775134\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"372a6d6fda3d58bd99064b4e81ba0b51993d2005b5f725d0e2b58fbe3b69188d\"" Jan 17 00:19:58.154426 containerd[1461]: time="2026-01-17T00:19:58.154339478Z" level=info msg="CreateContainer within sandbox \"d8b3b6aec9743590743a9053ba8c7cecb56de688c8f52b8bbec78a69568545f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:19:58.156581 containerd[1461]: time="2026-01-17T00:19:58.155707480Z" level=info msg="StartContainer for \"372a6d6fda3d58bd99064b4e81ba0b51993d2005b5f725d0e2b58fbe3b69188d\"" Jan 17 00:19:58.157844 kubelet[2155]: I0117 00:19:58.157809 2155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:58.159049 kubelet[2155]: E0117 00:19:58.158991 2155 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://209.38.74.55:6443/api/v1/nodes\": dial tcp 209.38.74.55:6443: connect: connection refused" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:58.173050 containerd[1461]: time="2026-01-17T00:19:58.172954392Z" level=info msg="CreateContainer within sandbox \"018fc6df3c14b0d635422e472f2fd6fbd8c894824d7da7ee5b22b64c4321d15b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32646a3d89150de90dc01251c471524d6a19defd3f54b181a2a18b862bddcca7\"" Jan 17 00:19:58.175724 containerd[1461]: time="2026-01-17T00:19:58.175656267Z" level=info msg="StartContainer for \"32646a3d89150de90dc01251c471524d6a19defd3f54b181a2a18b862bddcca7\"" Jan 17 00:19:58.177460 containerd[1461]: time="2026-01-17T00:19:58.177381833Z" level=info msg="CreateContainer within sandbox \"d8b3b6aec9743590743a9053ba8c7cecb56de688c8f52b8bbec78a69568545f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bffa1011ed02f2eb9c2c16e779bb6d2008a027ffb0cbf55255c46fc8af302a5d\"" Jan 17 00:19:58.181506 containerd[1461]: time="2026-01-17T00:19:58.179810878Z" level=info msg="StartContainer for \"bffa1011ed02f2eb9c2c16e779bb6d2008a027ffb0cbf55255c46fc8af302a5d\"" Jan 17 00:19:58.208807 systemd[1]: Started cri-containerd-372a6d6fda3d58bd99064b4e81ba0b51993d2005b5f725d0e2b58fbe3b69188d.scope - libcontainer container 372a6d6fda3d58bd99064b4e81ba0b51993d2005b5f725d0e2b58fbe3b69188d. Jan 17 00:19:58.236848 systemd[1]: Started cri-containerd-32646a3d89150de90dc01251c471524d6a19defd3f54b181a2a18b862bddcca7.scope - libcontainer container 32646a3d89150de90dc01251c471524d6a19defd3f54b181a2a18b862bddcca7. Jan 17 00:19:58.260893 systemd[1]: Started cri-containerd-bffa1011ed02f2eb9c2c16e779bb6d2008a027ffb0cbf55255c46fc8af302a5d.scope - libcontainer container bffa1011ed02f2eb9c2c16e779bb6d2008a027ffb0cbf55255c46fc8af302a5d. 
Jan 17 00:19:58.324909 containerd[1461]: time="2026-01-17T00:19:58.324834057Z" level=info msg="StartContainer for \"372a6d6fda3d58bd99064b4e81ba0b51993d2005b5f725d0e2b58fbe3b69188d\" returns successfully" Jan 17 00:19:58.351017 containerd[1461]: time="2026-01-17T00:19:58.350358026Z" level=info msg="StartContainer for \"32646a3d89150de90dc01251c471524d6a19defd3f54b181a2a18b862bddcca7\" returns successfully" Jan 17 00:19:58.359493 containerd[1461]: time="2026-01-17T00:19:58.359266414Z" level=info msg="StartContainer for \"bffa1011ed02f2eb9c2c16e779bb6d2008a027ffb0cbf55255c46fc8af302a5d\" returns successfully" Jan 17 00:19:58.594686 kubelet[2155]: E0117 00:19:58.593773 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://209.38.74.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.74.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:19:58.618141 kubelet[2155]: E0117 00:19:58.618098 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:58.618300 kubelet[2155]: E0117 00:19:58.618277 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:58.621009 kubelet[2155]: E0117 00:19:58.620976 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:58.621131 kubelet[2155]: E0117 00:19:58.621116 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:58.625081 kubelet[2155]: E0117 00:19:58.625043 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:58.625233 kubelet[2155]: E0117 00:19:58.625206 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:59.629563 kubelet[2155]: E0117 00:19:59.629499 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:59.630111 kubelet[2155]: E0117 00:19:59.629670 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:59.630111 kubelet[2155]: E0117 00:19:59.629977 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:19:59.630111 kubelet[2155]: E0117 00:19:59.630063 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:59.763157 kubelet[2155]: 
I0117 00:19:59.760603 2155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:00.630637 kubelet[2155]: E0117 00:20:00.630596 2155 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:00.631169 kubelet[2155]: E0117 00:20:00.630792 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:01.352158 kubelet[2155]: I0117 00:20:01.351669 2155 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.352158 kubelet[2155]: E0117 00:20:01.351727 2155 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-8cc98427e3\": node \"ci-4081.3.6-n-8cc98427e3\" not found" Jan 17 00:20:01.458043 kubelet[2155]: I0117 00:20:01.457439 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.474303 kubelet[2155]: E0117 00:20:01.474256 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.474936 kubelet[2155]: I0117 00:20:01.474632 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.477430 kubelet[2155]: E0117 00:20:01.477121 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.477430 kubelet[2155]: I0117 00:20:01.477182 2155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.480251 kubelet[2155]: E0117 00:20:01.480193 2155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8cc98427e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:01.532006 kubelet[2155]: I0117 00:20:01.531940 2155 apiserver.go:52] "Watching apiserver" Jan 17 00:20:01.563704 kubelet[2155]: I0117 00:20:01.563637 2155 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:20:04.227113 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-7.scope)... Jan 17 00:20:04.227591 systemd[1]: Reloading... Jan 17 00:20:04.345584 zram_generator::config[2484]: No configuration found. Jan 17 00:20:04.502988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:20:04.615199 systemd[1]: Reloading finished in 387 ms. Jan 17 00:20:04.669258 kubelet[2155]: I0117 00:20:04.668781 2155 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:20:04.669095 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
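The forbidden mirror-pod creates at 00:20:01 fail only because the built-in PriorityClass system-node-critical has not been installed yet; the apiserver populates the default PriorityClasses shortly after it comes up, which is why the same creates later succeed with only DNS-label warnings. A client-go sketch that checks for it, under the same hypothetical kubeconfig assumption as the sketch above:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(), "system-node-critical", metav1.GetOptions{})
    if err != nil {
        fmt.Println("not installed yet:", err) // the state at 00:20:01 above
        return
    }
    fmt.Println("priority value:", pc.Value) // 2000001000 once the defaults exist
}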
Jan 17 00:20:04.681515 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:20:04.681872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:20:04.681955 systemd[1]: kubelet.service: Consumed 1.297s CPU time, 128.1M memory peak, 0B memory swap peak. Jan 17 00:20:04.693105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:20:04.916844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:20:04.930653 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:20:05.024575 kubelet[2529]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:20:05.024575 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:20:05.024575 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:20:05.024575 kubelet[2529]: I0117 00:20:05.023786 2529 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:20:05.033324 kubelet[2529]: I0117 00:20:05.033260 2529 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:20:05.033324 kubelet[2529]: I0117 00:20:05.033295 2529 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:20:05.033731 kubelet[2529]: I0117 00:20:05.033691 2529 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:20:05.035430 kubelet[2529]: I0117 00:20:05.035400 2529 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:20:05.039137 kubelet[2529]: I0117 00:20:05.038564 2529 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:20:05.052736 kubelet[2529]: E0117 00:20:05.052681 2529 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:20:05.052974 kubelet[2529]: I0117 00:20:05.052961 2529 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:20:05.056255 kubelet[2529]: I0117 00:20:05.056224 2529 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:20:05.059460 kubelet[2529]: I0117 00:20:05.059195 2529 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:20:05.059460 kubelet[2529]: I0117 00:20:05.059234 2529 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8cc98427e3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:20:05.060391 kubelet[2529]: I0117 00:20:05.060371 2529 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:20:05.060447 kubelet[2529]: I0117 00:20:05.060441 2529 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:20:05.060570 kubelet[2529]: I0117 00:20:05.060561 2529 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:20:05.060825 kubelet[2529]: I0117 00:20:05.060808 2529 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:20:05.060947 kubelet[2529]: I0117 00:20:05.060933 2529 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:20:05.062181 kubelet[2529]: I0117 00:20:05.062152 2529 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:20:05.062406 kubelet[2529]: I0117 00:20:05.062307 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:20:05.066614 kubelet[2529]: I0117 00:20:05.066499 2529 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:20:05.070577 kubelet[2529]: I0117 00:20:05.069106 2529 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:20:05.075119 kubelet[2529]: I0117 00:20:05.075090 2529 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:20:05.075354 kubelet[2529]: I0117 00:20:05.075340 2529 server.go:1289] "Started kubelet" Jan 17 00:20:05.079003 kubelet[2529]: I0117 00:20:05.078909 2529 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 
00:20:05.079356 kubelet[2529]: I0117 00:20:05.079336 2529 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:20:05.079443 kubelet[2529]: I0117 00:20:05.079397 2529 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:20:05.081436 kubelet[2529]: I0117 00:20:05.081411 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:20:05.082501 kubelet[2529]: I0117 00:20:05.082470 2529 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:20:05.089035 kubelet[2529]: I0117 00:20:05.088996 2529 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:20:05.097921 kubelet[2529]: I0117 00:20:05.093214 2529 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:20:05.098835 kubelet[2529]: I0117 00:20:05.093228 2529 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:20:05.099091 kubelet[2529]: E0117 00:20:05.093480 2529 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8cc98427e3\" not found" Jan 17 00:20:05.099362 kubelet[2529]: I0117 00:20:05.099346 2529 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:20:05.104901 kubelet[2529]: I0117 00:20:05.104631 2529 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:20:05.107956 kubelet[2529]: I0117 00:20:05.107848 2529 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:20:05.115294 kubelet[2529]: I0117 00:20:05.115224 2529 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:20:05.132038 kubelet[2529]: I0117 00:20:05.131983 2529 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:20:05.134252 kubelet[2529]: I0117 00:20:05.134217 2529 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:20:05.135243 kubelet[2529]: I0117 00:20:05.135219 2529 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:20:05.135378 kubelet[2529]: I0117 00:20:05.135364 2529 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:20:05.135445 kubelet[2529]: I0117 00:20:05.135435 2529 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:20:05.135665 kubelet[2529]: E0117 00:20:05.135628 2529 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:20:05.145718 kubelet[2529]: E0117 00:20:05.145440 2529 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195482 2529 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195527 2529 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195573 2529 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195762 2529 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195775 2529 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195793 2529 policy_none.go:49] "None policy: Start" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195805 2529 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195815 2529 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:20:05.196007 kubelet[2529]: I0117 00:20:05.195910 2529 state_mem.go:75] "Updated machine memory state" Jan 17 00:20:05.206613 kubelet[2529]: E0117 00:20:05.206021 2529 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:20:05.206613 kubelet[2529]: I0117 00:20:05.206342 2529 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:20:05.206613 kubelet[2529]: I0117 00:20:05.206360 2529 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:20:05.206934 kubelet[2529]: I0117 00:20:05.206914 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:20:05.214576 kubelet[2529]: E0117 00:20:05.211525 2529 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:20:05.237172 kubelet[2529]: I0117 00:20:05.237085 2529 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.240174 kubelet[2529]: I0117 00:20:05.240140 2529 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.242582 kubelet[2529]: I0117 00:20:05.241791 2529 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.251744 kubelet[2529]: I0117 00:20:05.251709 2529 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:05.252953 kubelet[2529]: I0117 00:20:05.252935 2529 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:05.257647 kubelet[2529]: I0117 00:20:05.257609 2529 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:05.302148 kubelet[2529]: I0117 00:20:05.302091 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a7babb4715cbac282b66acf6985350c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8cc98427e3\" (UID: \"3a7babb4715cbac282b66acf6985350c\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.302597 kubelet[2529]: I0117 00:20:05.302518 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dc6e7616bc482ea4b087f3e27c00151-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" (UID: \"2dc6e7616bc482ea4b087f3e27c00151\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.302803 kubelet[2529]: I0117 00:20:05.302769 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.302975 kubelet[2529]: I0117 00:20:05.302957 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.303152 kubelet[2529]: I0117 00:20:05.303129 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dc6e7616bc482ea4b087f3e27c00151-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" (UID: \"2dc6e7616bc482ea4b087f3e27c00151\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.303309 kubelet[2529]: I0117 00:20:05.303290 2529 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dc6e7616bc482ea4b087f3e27c00151-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" (UID: \"2dc6e7616bc482ea4b087f3e27c00151\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.303451 kubelet[2529]: I0117 00:20:05.303433 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.303599 kubelet[2529]: I0117 00:20:05.303583 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.303760 kubelet[2529]: I0117 00:20:05.303738 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edc5817c08a8da2954a53ff3fe733618-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8cc98427e3\" (UID: \"edc5817c08a8da2954a53ff3fe733618\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.326240 kubelet[2529]: I0117 00:20:05.326097 2529 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.340529 kubelet[2529]: I0117 00:20:05.340182 2529 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.340529 kubelet[2529]: I0117 00:20:05.340472 2529 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:05.555730 kubelet[2529]: E0117 00:20:05.554421 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:05.557886 kubelet[2529]: E0117 00:20:05.557809 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:05.559142 kubelet[2529]: E0117 00:20:05.558815 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:06.073844 kubelet[2529]: I0117 00:20:06.073793 2529 apiserver.go:52] "Watching apiserver" Jan 17 00:20:06.099089 kubelet[2529]: I0117 00:20:06.099031 2529 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:20:06.173600 kubelet[2529]: E0117 00:20:06.173098 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:06.174306 kubelet[2529]: E0117 00:20:06.174284 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:06.174737 kubelet[2529]: I0117 00:20:06.174721 2529 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:06.201569 kubelet[2529]: I0117 00:20:06.201276 2529 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:20:06.202564 kubelet[2529]: E0117 00:20:06.201811 2529 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8cc98427e3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:06.202564 kubelet[2529]: E0117 00:20:06.202007 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:06.239567 kubelet[2529]: I0117 00:20:06.238099 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8cc98427e3" podStartSLOduration=1.238077666 podStartE2EDuration="1.238077666s" podCreationTimestamp="2026-01-17 00:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:06.225657834 +0000 UTC m=+1.285930134" watchObservedRunningTime="2026-01-17 00:20:06.238077666 +0000 UTC m=+1.298349946" Jan 17 00:20:06.252254 kubelet[2529]: I0117 00:20:06.252196 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8cc98427e3" podStartSLOduration=1.25217781 podStartE2EDuration="1.25217781s" podCreationTimestamp="2026-01-17 00:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:06.252072814 +0000 UTC m=+1.312345103" watchObservedRunningTime="2026-01-17 00:20:06.25217781 +0000 UTC m=+1.312450067" Jan 17 00:20:06.253384 kubelet[2529]: I0117 00:20:06.253162 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8cc98427e3" podStartSLOduration=1.2526967820000001 podStartE2EDuration="1.252696782s" podCreationTimestamp="2026-01-17 00:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:06.238353758 +0000 UTC m=+1.298626046" watchObservedRunningTime="2026-01-17 00:20:06.252696782 +0000 UTC m=+1.312969073" Jan 17 00:20:07.175518 kubelet[2529]: E0117 00:20:07.175468 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:07.176435 kubelet[2529]: E0117 00:20:07.176280 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:08.107404 update_engine[1447]: I20260117 00:20:08.107256 1447 update_attempter.cc:509] Updating boot flags... 
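The recurring dns.go:153 warnings on both kubelet generations come from the kubelet capping the nameserver list it takes from the host's resolv.conf; the cap is three in current kubelets (an assumption about this build), and the applied line even carries a duplicate entry, so the source file evidently listed more servers. A Go sketch of that truncation:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    const maxNameservers = 3 // kubelet's cap, assumed for this sketch
    f, err := os.Open("/etc/resolv.conf")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if len(servers) > maxNameservers {
        fmt.Println("omitting:", servers[maxNameservers:])
        servers = servers[:maxNameservers]
    }
    fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}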
Jan 17 00:20:08.154793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2584)
Jan 17 00:20:08.180621 kubelet[2529]: E0117 00:20:08.177835 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:08.238643 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2583)
Jan 17 00:20:09.301934 kubelet[2529]: E0117 00:20:09.301563 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:10.184684 kubelet[2529]: E0117 00:20:10.184636 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:10.830570 kubelet[2529]: I0117 00:20:10.830415 2529 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:20:10.834114 containerd[1461]: time="2026-01-17T00:20:10.834046920Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:20:10.835081 kubelet[2529]: I0117 00:20:10.834481 2529 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:20:11.712511 systemd[1]: Created slice kubepods-besteffort-pod191e2452_2a15_44e3_8540_543d40e67dad.slice - libcontainer container kubepods-besteffort-pod191e2452_2a15_44e3_8540_543d40e67dad.slice.
Jan 17 00:20:11.746673 kubelet[2529]: I0117 00:20:11.746428 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/191e2452-2a15-44e3-8540-543d40e67dad-kube-proxy\") pod \"kube-proxy-g9pwf\" (UID: \"191e2452-2a15-44e3-8540-543d40e67dad\") " pod="kube-system/kube-proxy-g9pwf"
Jan 17 00:20:11.746673 kubelet[2529]: I0117 00:20:11.746485 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/191e2452-2a15-44e3-8540-543d40e67dad-xtables-lock\") pod \"kube-proxy-g9pwf\" (UID: \"191e2452-2a15-44e3-8540-543d40e67dad\") " pod="kube-system/kube-proxy-g9pwf"
Jan 17 00:20:11.746673 kubelet[2529]: I0117 00:20:11.746530 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/191e2452-2a15-44e3-8540-543d40e67dad-lib-modules\") pod \"kube-proxy-g9pwf\" (UID: \"191e2452-2a15-44e3-8540-543d40e67dad\") " pod="kube-system/kube-proxy-g9pwf"
Jan 17 00:20:11.746673 kubelet[2529]: I0117 00:20:11.746567 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxb2s\" (UniqueName: \"kubernetes.io/projected/191e2452-2a15-44e3-8540-543d40e67dad-kube-api-access-zxb2s\") pod \"kube-proxy-g9pwf\" (UID: \"191e2452-2a15-44e3-8540-543d40e67dad\") " pod="kube-system/kube-proxy-g9pwf"
Jan 17 00:20:11.795712 systemd[1]: Created slice kubepods-besteffort-pod815bc847_444d_4476_bf8d_b657549bfb6d.slice - libcontainer container kubepods-besteffort-pod815bc847_444d_4476_bf8d_b657549bfb6d.slice.
Jan 17 00:20:11.847409 kubelet[2529]: I0117 00:20:11.847332 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9m8k\" (UniqueName: \"kubernetes.io/projected/815bc847-444d-4476-bf8d-b657549bfb6d-kube-api-access-s9m8k\") pod \"tigera-operator-7dcd859c48-4c2dh\" (UID: \"815bc847-444d-4476-bf8d-b657549bfb6d\") " pod="tigera-operator/tigera-operator-7dcd859c48-4c2dh"
Jan 17 00:20:11.848010 kubelet[2529]: I0117 00:20:11.847450 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/815bc847-444d-4476-bf8d-b657549bfb6d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4c2dh\" (UID: \"815bc847-444d-4476-bf8d-b657549bfb6d\") " pod="tigera-operator/tigera-operator-7dcd859c48-4c2dh"
Jan 17 00:20:12.021846 kubelet[2529]: E0117 00:20:12.021640 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:12.022785 containerd[1461]: time="2026-01-17T00:20:12.022383403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g9pwf,Uid:191e2452-2a15-44e3-8540-543d40e67dad,Namespace:kube-system,Attempt:0,}"
Jan 17 00:20:12.056851 containerd[1461]: time="2026-01-17T00:20:12.055847858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:12.056851 containerd[1461]: time="2026-01-17T00:20:12.055945395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:12.056851 containerd[1461]: time="2026-01-17T00:20:12.055971694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:12.056851 containerd[1461]: time="2026-01-17T00:20:12.056105289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:12.085828 systemd[1]: Started cri-containerd-4208cb401b32eec8dba533aa5f3dc06bbf8c836621c268b9a220e9e9045e6dca.scope - libcontainer container 4208cb401b32eec8dba533aa5f3dc06bbf8c836621c268b9a220e9e9045e6dca.
Jan 17 00:20:12.104477 containerd[1461]: time="2026-01-17T00:20:12.104069980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4c2dh,Uid:815bc847-444d-4476-bf8d-b657549bfb6d,Namespace:tigera-operator,Attempt:0,}"
Jan 17 00:20:12.130519 containerd[1461]: time="2026-01-17T00:20:12.130478184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g9pwf,Uid:191e2452-2a15-44e3-8540-543d40e67dad,Namespace:kube-system,Attempt:0,} returns sandbox id \"4208cb401b32eec8dba533aa5f3dc06bbf8c836621c268b9a220e9e9045e6dca\""
Jan 17 00:20:12.131806 kubelet[2529]: E0117 00:20:12.131773 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:12.141785 containerd[1461]: time="2026-01-17T00:20:12.141473234Z" level=info msg="CreateContainer within sandbox \"4208cb401b32eec8dba533aa5f3dc06bbf8c836621c268b9a220e9e9045e6dca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:20:12.158061 containerd[1461]: time="2026-01-17T00:20:12.157928606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:12.158270 containerd[1461]: time="2026-01-17T00:20:12.158025790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:12.158270 containerd[1461]: time="2026-01-17T00:20:12.158075057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:12.158380 containerd[1461]: time="2026-01-17T00:20:12.158264042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:12.159398 containerd[1461]: time="2026-01-17T00:20:12.159275878Z" level=info msg="CreateContainer within sandbox \"4208cb401b32eec8dba533aa5f3dc06bbf8c836621c268b9a220e9e9045e6dca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1327472d13dfb0bd0ab04683cd2c2e32faeff848b77fbdbe3ac332138058871a\""
Jan 17 00:20:12.162095 containerd[1461]: time="2026-01-17T00:20:12.162051053Z" level=info msg="StartContainer for \"1327472d13dfb0bd0ab04683cd2c2e32faeff848b77fbdbe3ac332138058871a\""
Jan 17 00:20:12.196497 systemd[1]: Started cri-containerd-5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02.scope - libcontainer container 5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02.
Jan 17 00:20:12.220018 systemd[1]: Started cri-containerd-1327472d13dfb0bd0ab04683cd2c2e32faeff848b77fbdbe3ac332138058871a.scope - libcontainer container 1327472d13dfb0bd0ab04683cd2c2e32faeff848b77fbdbe3ac332138058871a.
Jan 17 00:20:12.267311 containerd[1461]: time="2026-01-17T00:20:12.265661693Z" level=info msg="StartContainer for \"1327472d13dfb0bd0ab04683cd2c2e32faeff848b77fbdbe3ac332138058871a\" returns successfully"
Jan 17 00:20:12.282173 containerd[1461]: time="2026-01-17T00:20:12.281900639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4c2dh,Uid:815bc847-444d-4476-bf8d-b657549bfb6d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02\""
Jan 17 00:20:12.288501 containerd[1461]: time="2026-01-17T00:20:12.288212815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 17 00:20:13.201670 kubelet[2529]: E0117 00:20:13.201601 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:13.484913 kubelet[2529]: E0117 00:20:13.483910 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:13.509009 kubelet[2529]: I0117 00:20:13.508898 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g9pwf" podStartSLOduration=2.508869159 podStartE2EDuration="2.508869159s" podCreationTimestamp="2026-01-17 00:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:20:13.222109142 +0000 UTC m=+8.282381430" watchObservedRunningTime="2026-01-17 00:20:13.508869159 +0000 UTC m=+8.569141451"
Jan 17 00:20:13.775313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260862968.mount: Deactivated successfully.
Jan 17 00:20:14.208250 kubelet[2529]: E0117 00:20:14.208191 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:14.210111 kubelet[2529]: E0117 00:20:14.208751 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:15.214105 containerd[1461]: time="2026-01-17T00:20:15.214034421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:15.216398 containerd[1461]: time="2026-01-17T00:20:15.215849023Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 17 00:20:15.219691 containerd[1461]: time="2026-01-17T00:20:15.219600290Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:15.222798 containerd[1461]: time="2026-01-17T00:20:15.222719628Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:20:15.225271 containerd[1461]: time="2026-01-17T00:20:15.225184846Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.936925935s"
Jan 17 00:20:15.225271 containerd[1461]: time="2026-01-17T00:20:15.225251208Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 17 00:20:15.233586 containerd[1461]: time="2026-01-17T00:20:15.233364573Z" level=info msg="CreateContainer within sandbox \"5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 00:20:15.261933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158088556.mount: Deactivated successfully.
Jan 17 00:20:15.272822 containerd[1461]: time="2026-01-17T00:20:15.272529514Z" level=info msg="CreateContainer within sandbox \"5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523\""
Jan 17 00:20:15.274213 containerd[1461]: time="2026-01-17T00:20:15.273894604Z" level=info msg="StartContainer for \"f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523\""
Jan 17 00:20:15.343824 systemd[1]: run-containerd-runc-k8s.io-f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523-runc.yExTUq.mount: Deactivated successfully.
Jan 17 00:20:15.355876 systemd[1]: Started cri-containerd-f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523.scope - libcontainer container f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523.
Jan 17 00:20:15.406261 containerd[1461]: time="2026-01-17T00:20:15.406187598Z" level=info msg="StartContainer for \"f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523\" returns successfully"
Jan 17 00:20:17.459566 kubelet[2529]: E0117 00:20:17.459161 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:17.557370 kubelet[2529]: I0117 00:20:17.557298 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4c2dh" podStartSLOduration=3.60958144 podStartE2EDuration="6.549698083s" podCreationTimestamp="2026-01-17 00:20:11 +0000 UTC" firstStartedPulling="2026-01-17 00:20:12.286306806 +0000 UTC m=+7.346579062" lastFinishedPulling="2026-01-17 00:20:15.226423437 +0000 UTC m=+10.286695705" observedRunningTime="2026-01-17 00:20:16.242760487 +0000 UTC m=+11.303032783" watchObservedRunningTime="2026-01-17 00:20:17.549698083 +0000 UTC m=+12.609970370"
Jan 17 00:20:18.227964 kubelet[2529]: E0117 00:20:18.227915 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:19.487259 systemd[1]: cri-containerd-f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523.scope: Deactivated successfully.
Jan 17 00:20:19.536369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523-rootfs.mount: Deactivated successfully.
Jan 17 00:20:19.615265 containerd[1461]: time="2026-01-17T00:20:19.557311833Z" level=info msg="shim disconnected" id=f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523 namespace=k8s.io
Jan 17 00:20:19.615265 containerd[1461]: time="2026-01-17T00:20:19.615245201Z" level=warning msg="cleaning up after shim disconnected" id=f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523 namespace=k8s.io
Jan 17 00:20:19.615265 containerd[1461]: time="2026-01-17T00:20:19.615278919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:20:20.241573 kubelet[2529]: I0117 00:20:20.238861 2529 scope.go:117] "RemoveContainer" containerID="f6205aa9bd3e2f8b240bfd3ff8ecfc468674323d20b701f7f665f9a89a39e523"
Jan 17 00:20:20.270711 containerd[1461]: time="2026-01-17T00:20:20.270378573Z" level=info msg="CreateContainer within sandbox \"5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 00:20:20.301695 containerd[1461]: time="2026-01-17T00:20:20.301497531Z" level=info msg="CreateContainer within sandbox \"5b0cb36f6417c4f37897e50b779068130d5bb7c78f998693ab120e06d8353d02\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"564b1f59b5eee7e103dd04101500bace4c8a90e21e5dbe50dec81dd3a47d6801\""
Jan 17 00:20:20.303879 containerd[1461]: time="2026-01-17T00:20:20.303841041Z" level=info msg="StartContainer for \"564b1f59b5eee7e103dd04101500bace4c8a90e21e5dbe50dec81dd3a47d6801\""
Jan 17 00:20:20.403774 systemd[1]: Started cri-containerd-564b1f59b5eee7e103dd04101500bace4c8a90e21e5dbe50dec81dd3a47d6801.scope - libcontainer container 564b1f59b5eee7e103dd04101500bace4c8a90e21e5dbe50dec81dd3a47d6801.
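The pod_startup_latency_tracker entries record two figures: podStartE2EDuration is the wall-clock time from pod creation to the pod being observed running, and podStartSLOduration appears to be that same interval minus the time spent pulling images (the static pods earlier show identical values because their pull timestamps are zero). The tigera-operator entry above bears the subtraction out exactly; a sketch of the arithmetic using the monotonic (m=+...) offsets from that entry, with the subtraction rule inferred from the logged numbers rather than quoted from kubelet source:

package main

import "fmt"

func main() {
	// Monotonic clock offsets copied from the tigera-operator entry above.
	firstStartedPulling := 7.346579062  // m=+ offset of firstStartedPulling
	lastFinishedPulling := 10.286695705 // m=+ offset of lastFinishedPulling
	podStartE2EDuration := 6.549698083  // seconds

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull: %.9fs, derived SLO duration: %.8fs\n", pull, podStartE2EDuration-pull)
	// image pull: 2.940116643s, derived SLO duration: 3.60958144s --
	// matching the logged podStartSLOduration=3.60958144
}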
Jan 17 00:20:20.467007 containerd[1461]: time="2026-01-17T00:20:20.466768826Z" level=info msg="StartContainer for \"564b1f59b5eee7e103dd04101500bace4c8a90e21e5dbe50dec81dd3a47d6801\" returns successfully"
Jan 17 00:20:23.063518 sudo[1652]: pam_unix(sudo:session): session closed for user root
Jan 17 00:20:23.125612 sshd[1649]: pam_unix(sshd:session): session closed for user core
Jan 17 00:20:23.130426 systemd[1]: sshd@6-209.38.74.55:22-4.153.228.146:57164.service: Deactivated successfully.
Jan 17 00:20:23.133093 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:20:23.133415 systemd[1]: session-7.scope: Consumed 7.808s CPU time, 143.1M memory peak, 0B memory swap peak.
Jan 17 00:20:23.134183 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:20:23.136092 systemd-logind[1446]: Removed session 7.
Jan 17 00:20:31.804087 systemd[1]: Created slice kubepods-besteffort-podb3acfc9f_a0b7_457c_ad3f_c6c8ccbffda4.slice - libcontainer container kubepods-besteffort-podb3acfc9f_a0b7_457c_ad3f_c6c8ccbffda4.slice.
Jan 17 00:20:31.907309 kubelet[2529]: I0117 00:20:31.907236 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4-typha-certs\") pod \"calico-typha-7b4668bdb8-h8h2v\" (UID: \"b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4\") " pod="calico-system/calico-typha-7b4668bdb8-h8h2v"
Jan 17 00:20:31.907309 kubelet[2529]: I0117 00:20:31.907317 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hzs6\" (UniqueName: \"kubernetes.io/projected/b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4-kube-api-access-5hzs6\") pod \"calico-typha-7b4668bdb8-h8h2v\" (UID: \"b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4\") " pod="calico-system/calico-typha-7b4668bdb8-h8h2v"
Jan 17 00:20:31.907934 kubelet[2529]: I0117 00:20:31.907351 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4-tigera-ca-bundle\") pod \"calico-typha-7b4668bdb8-h8h2v\" (UID: \"b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4\") " pod="calico-system/calico-typha-7b4668bdb8-h8h2v"
Jan 17 00:20:32.080485 systemd[1]: Created slice kubepods-besteffort-pod11fdf435_b5ab_4f14_927e_8b157166ed3a.slice - libcontainer container kubepods-besteffort-pod11fdf435_b5ab_4f14_927e_8b157166ed3a.slice.
Jan 17 00:20:32.112456 kubelet[2529]: E0117 00:20:32.112170 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:32.113602 containerd[1461]: time="2026-01-17T00:20:32.113533568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b4668bdb8-h8h2v,Uid:b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4,Namespace:calico-system,Attempt:0,}"
Jan 17 00:20:32.162710 containerd[1461]: time="2026-01-17T00:20:32.162503173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:32.162710 containerd[1461]: time="2026-01-17T00:20:32.162626717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:32.162710 containerd[1461]: time="2026-01-17T00:20:32.162648648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:32.163369 containerd[1461]: time="2026-01-17T00:20:32.162785683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:32.211339 kubelet[2529]: I0117 00:20:32.210673 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/11fdf435-b5ab-4f14-927e-8b157166ed3a-node-certs\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.211339 kubelet[2529]: I0117 00:20:32.210725 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-policysync\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.211339 kubelet[2529]: I0117 00:20:32.210759 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-xtables-lock\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.211339 kubelet[2529]: I0117 00:20:32.210793 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-var-run-calico\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.211339 kubelet[2529]: I0117 00:20:32.210826 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-cni-log-dir\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.210835 systemd[1]: Started cri-containerd-b85f551935e2bc44474da3b41d9bdbb9d14aaa2ada8af523e5ac65b225b8ee50.scope - libcontainer container b85f551935e2bc44474da3b41d9bdbb9d14aaa2ada8af523e5ac65b225b8ee50.
Jan 17 00:20:32.213625 kubelet[2529]: I0117 00:20:32.210855 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-lib-modules\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.213625 kubelet[2529]: I0117 00:20:32.210882 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fdf435-b5ab-4f14-927e-8b157166ed3a-tigera-ca-bundle\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.213625 kubelet[2529]: I0117 00:20:32.210910 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-flexvol-driver-host\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.213625 kubelet[2529]: I0117 00:20:32.210946 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-cni-net-dir\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.213625 kubelet[2529]: I0117 00:20:32.210973 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-var-lib-calico\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.213825 kubelet[2529]: I0117 00:20:32.210996 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/11fdf435-b5ab-4f14-927e-8b157166ed3a-cni-bin-dir\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.213825 kubelet[2529]: I0117 00:20:32.211024 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmvb\" (UniqueName: \"kubernetes.io/projected/11fdf435-b5ab-4f14-927e-8b157166ed3a-kube-api-access-2cmvb\") pod \"calico-node-qsgwf\" (UID: \"11fdf435-b5ab-4f14-927e-8b157166ed3a\") " pod="calico-system/calico-node-qsgwf"
Jan 17 00:20:32.264780 kubelet[2529]: E0117 00:20:32.264017 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0"
Jan 17 00:20:32.312575 kubelet[2529]: I0117 00:20:32.311321 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0-kubelet-dir\") pod \"csi-node-driver-dcsb9\" (UID: \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\") " pod="calico-system/csi-node-driver-dcsb9"
Jan 17 00:20:32.312575 kubelet[2529]: I0117 00:20:32.311362 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0-registration-dir\") pod \"csi-node-driver-dcsb9\" (UID: \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\") " pod="calico-system/csi-node-driver-dcsb9"
Jan 17 00:20:32.312575 kubelet[2529]: I0117 00:20:32.311382 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0-socket-dir\") pod \"csi-node-driver-dcsb9\" (UID: \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\") " pod="calico-system/csi-node-driver-dcsb9"
Jan 17 00:20:32.312575 kubelet[2529]: I0117 00:20:32.311461 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k797\" (UniqueName: \"kubernetes.io/projected/96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0-kube-api-access-4k797\") pod \"csi-node-driver-dcsb9\" (UID: \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\") " pod="calico-system/csi-node-driver-dcsb9"
Jan 17 00:20:32.312575 kubelet[2529]: I0117 00:20:32.311504 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0-varrun\") pod \"csi-node-driver-dcsb9\" (UID: \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\") " pod="calico-system/csi-node-driver-dcsb9"
Jan 17 00:20:32.329105 kubelet[2529]: E0117 00:20:32.329044 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.329442 kubelet[2529]: W0117 00:20:32.329396 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.329875 kubelet[2529]: E0117 00:20:32.329732 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.342029 kubelet[2529]: E0117 00:20:32.340991 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.354166 containerd[1461]: time="2026-01-17T00:20:32.353774796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b4668bdb8-h8h2v,Uid:b3acfc9f-a0b7-457c-ad3f-c6c8ccbffda4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b85f551935e2bc44474da3b41d9bdbb9d14aaa2ada8af523e5ac65b225b8ee50\""
Jan 17 00:20:32.356397 kubelet[2529]: W0117 00:20:32.342524 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.356397 kubelet[2529]: E0117 00:20:32.356331 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.362821 kubelet[2529]: E0117 00:20:32.361618 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:32.364448 containerd[1461]: time="2026-01-17T00:20:32.364403035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:20:32.386473 kubelet[2529]: E0117 00:20:32.386386 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:20:32.387890 containerd[1461]: time="2026-01-17T00:20:32.387832702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsgwf,Uid:11fdf435-b5ab-4f14-927e-8b157166ed3a,Namespace:calico-system,Attempt:0,}"
Jan 17 00:20:32.413814 kubelet[2529]: E0117 00:20:32.413773 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.413814 kubelet[2529]: W0117 00:20:32.413802 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.413814 kubelet[2529]: E0117 00:20:32.413827 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.414679 kubelet[2529]: E0117 00:20:32.414097 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.414679 kubelet[2529]: W0117 00:20:32.414105 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.414679 kubelet[2529]: E0117 00:20:32.414115 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.416433 kubelet[2529]: E0117 00:20:32.416397 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.416433 kubelet[2529]: W0117 00:20:32.416428 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.416955 kubelet[2529]: E0117 00:20:32.416453 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.416955 kubelet[2529]: E0117 00:20:32.416778 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.416955 kubelet[2529]: W0117 00:20:32.416789 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.416955 kubelet[2529]: E0117 00:20:32.416802 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.417435 kubelet[2529]: E0117 00:20:32.417019 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.417435 kubelet[2529]: W0117 00:20:32.417028 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.417435 kubelet[2529]: E0117 00:20:32.417037 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.417435 kubelet[2529]: E0117 00:20:32.417190 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.417435 kubelet[2529]: W0117 00:20:32.417198 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.417435 kubelet[2529]: E0117 00:20:32.417206 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.417435 kubelet[2529]: E0117 00:20:32.417421 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.417435 kubelet[2529]: W0117 00:20:32.417433 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.418276 kubelet[2529]: E0117 00:20:32.417445 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.418781 kubelet[2529]: E0117 00:20:32.418759 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.418858 kubelet[2529]: W0117 00:20:32.418784 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.418858 kubelet[2529]: E0117 00:20:32.418805 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.419432 kubelet[2529]: E0117 00:20:32.419156 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.419432 kubelet[2529]: W0117 00:20:32.419177 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.419432 kubelet[2529]: E0117 00:20:32.419194 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.422621 kubelet[2529]: E0117 00:20:32.422586 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.422621 kubelet[2529]: W0117 00:20:32.422618 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.422818 kubelet[2529]: E0117 00:20:32.422644 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.423002 kubelet[2529]: E0117 00:20:32.422984 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.423049 kubelet[2529]: W0117 00:20:32.423004 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.423049 kubelet[2529]: E0117 00:20:32.423022 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.423262 kubelet[2529]: E0117 00:20:32.423248 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.423262 kubelet[2529]: W0117 00:20:32.423261 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.423353 kubelet[2529]: E0117 00:20:32.423271 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.424463 kubelet[2529]: E0117 00:20:32.424441 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.424463 kubelet[2529]: W0117 00:20:32.424458 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.424645 kubelet[2529]: E0117 00:20:32.424483 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.424695 kubelet[2529]: E0117 00:20:32.424678 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.424695 kubelet[2529]: W0117 00:20:32.424690 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.425088 kubelet[2529]: E0117 00:20:32.424698 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.425088 kubelet[2529]: E0117 00:20:32.424867 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.425088 kubelet[2529]: W0117 00:20:32.424875 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.425088 kubelet[2529]: E0117 00:20:32.424883 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.425088 kubelet[2529]: E0117 00:20:32.425017 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.425088 kubelet[2529]: W0117 00:20:32.425023 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.425088 kubelet[2529]: E0117 00:20:32.425030 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.425320 kubelet[2529]: E0117 00:20:32.425205 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.425320 kubelet[2529]: W0117 00:20:32.425213 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.425320 kubelet[2529]: E0117 00:20:32.425222 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.428714 kubelet[2529]: E0117 00:20:32.428687 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.428714 kubelet[2529]: W0117 00:20:32.428709 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.428714 kubelet[2529]: E0117 00:20:32.428728 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.431163 kubelet[2529]: E0117 00:20:32.429598 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.431163 kubelet[2529]: W0117 00:20:32.429615 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.431163 kubelet[2529]: E0117 00:20:32.429630 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.431163 kubelet[2529]: E0117 00:20:32.430918 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.431163 kubelet[2529]: W0117 00:20:32.430934 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.431163 kubelet[2529]: E0117 00:20:32.430952 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.432604 kubelet[2529]: E0117 00:20:32.432583 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.433450 kubelet[2529]: W0117 00:20:32.433329 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.433450 kubelet[2529]: E0117 00:20:32.433356 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.433909 kubelet[2529]: E0117 00:20:32.433867 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.433909 kubelet[2529]: W0117 00:20:32.433880 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.433909 kubelet[2529]: E0117 00:20:32.433892 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.435102 kubelet[2529]: E0117 00:20:32.434874 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.435102 kubelet[2529]: W0117 00:20:32.434889 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.435102 kubelet[2529]: E0117 00:20:32.434903 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.435678 kubelet[2529]: E0117 00:20:32.435651 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.436431 kubelet[2529]: W0117 00:20:32.436402 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.436891 kubelet[2529]: E0117 00:20:32.436676 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.437673 kubelet[2529]: E0117 00:20:32.437319 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.437673 kubelet[2529]: W0117 00:20:32.437332 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.437673 kubelet[2529]: E0117 00:20:32.437344 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.479574 containerd[1461]: time="2026-01-17T00:20:32.478981270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:20:32.479574 containerd[1461]: time="2026-01-17T00:20:32.479059059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:20:32.479574 containerd[1461]: time="2026-01-17T00:20:32.479089344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:32.479574 containerd[1461]: time="2026-01-17T00:20:32.479216502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:20:32.494571 kubelet[2529]: E0117 00:20:32.492976 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:20:32.494571 kubelet[2529]: W0117 00:20:32.492996 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:20:32.494571 kubelet[2529]: E0117 00:20:32.493016 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:32.544761 systemd[1]: Started cri-containerd-dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb.scope - libcontainer container dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb.
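The driver-call.go/plugins.go triplets that dominate this stretch are the kubelet re-probing its FlexVolume plugin directory: Calico's per-node flex driver (nodeagent~uds/uds) has not been installed yet, the exec fails with "executable file not found in $PATH", and unmarshalling the resulting empty output produces the "unexpected end of JSON input" error. A small Go sketch reproducing that failure mode; the DriverStatus shape is illustrative, not kubelet's exact type:

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver prints in response
// to an "init" call (the field names here are an assumption for illustration).
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary is missing, so the driver call yields no output at all;
	// decoding the empty output reproduces the exact logged error.
	var st DriverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input

	// A driver that is present would be expected to answer something like:
	out, _ := json.Marshal(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}

The noise subsides once calico-node's pod2daemon-flexvol init container (its image pull appears further below) installs the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/.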
Jan 17 00:20:32.665349 containerd[1461]: time="2026-01-17T00:20:32.665208243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsgwf,Uid:11fdf435-b5ab-4f14-927e-8b157166ed3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\"" Jan 17 00:20:32.675790 kubelet[2529]: E0117 00:20:32.675736 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:34.019757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409091926.mount: Deactivated successfully. Jan 17 00:20:34.140764 kubelet[2529]: E0117 00:20:34.140686 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:35.392977 containerd[1461]: time="2026-01-17T00:20:35.390732168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:35.410329 containerd[1461]: time="2026-01-17T00:20:35.410225984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 17 00:20:35.411527 containerd[1461]: time="2026-01-17T00:20:35.411469991Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:35.414452 containerd[1461]: time="2026-01-17T00:20:35.414397891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:35.416474 containerd[1461]: time="2026-01-17T00:20:35.416432447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.051977601s" Jan 17 00:20:35.416687 containerd[1461]: time="2026-01-17T00:20:35.416669655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:20:35.420761 containerd[1461]: time="2026-01-17T00:20:35.420706349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:20:35.457818 containerd[1461]: time="2026-01-17T00:20:35.457747053Z" level=info msg="CreateContainer within sandbox \"b85f551935e2bc44474da3b41d9bdbb9d14aaa2ada8af523e5ac65b225b8ee50\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:20:35.479731 containerd[1461]: time="2026-01-17T00:20:35.479484073Z" level=info msg="CreateContainer within sandbox \"b85f551935e2bc44474da3b41d9bdbb9d14aaa2ada8af523e5ac65b225b8ee50\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"542ece4d04dab1604484bfbc973da9b774073d8eee306d380d652c254fb27694\"" Jan 17 00:20:35.482340 containerd[1461]: time="2026-01-17T00:20:35.480732664Z" 
level=info msg="StartContainer for \"542ece4d04dab1604484bfbc973da9b774073d8eee306d380d652c254fb27694\"" Jan 17 00:20:35.535850 systemd[1]: Started cri-containerd-542ece4d04dab1604484bfbc973da9b774073d8eee306d380d652c254fb27694.scope - libcontainer container 542ece4d04dab1604484bfbc973da9b774073d8eee306d380d652c254fb27694. Jan 17 00:20:35.610177 containerd[1461]: time="2026-01-17T00:20:35.610109802Z" level=info msg="StartContainer for \"542ece4d04dab1604484bfbc973da9b774073d8eee306d380d652c254fb27694\" returns successfully" Jan 17 00:20:36.137181 kubelet[2529]: E0117 00:20:36.137067 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:36.294189 kubelet[2529]: E0117 00:20:36.293775 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:36.394427 kubelet[2529]: E0117 00:20:36.393996 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:36.394704 kubelet[2529]: W0117 00:20:36.394672 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:36.394988 kubelet[2529]: E0117 00:20:36.394854 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:36.395578 kubelet[2529]: E0117 00:20:36.395371 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:36.395578 kubelet[2529]: W0117 00:20:36.395401 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:36.395578 kubelet[2529]: E0117 00:20:36.395426 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:20:36.395858 kubelet[2529]: E0117 00:20:36.395829 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:36.396000 kubelet[2529]: W0117 00:20:36.395904 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:36.396000 kubelet[2529]: E0117 00:20:36.395935 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:20:36.439437 systemd[1]: run-containerd-runc-k8s.io-542ece4d04dab1604484bfbc973da9b774073d8eee306d380d652c254fb27694-runc.ToEdHO.mount: Deactivated successfully. Jan 17 00:20:36.454614 kubelet[2529]: E0117 00:20:36.454571 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:20:36.454614 kubelet[2529]: W0117 00:20:36.454601 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:20:36.454614 kubelet[2529]: E0117 00:20:36.454623 2529 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:20:37.072670 containerd[1461]: time="2026-01-17T00:20:37.072575894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:37.073917 containerd[1461]: time="2026-01-17T00:20:37.073838269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:20:37.074894 containerd[1461]: time="2026-01-17T00:20:37.074613186Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:37.076818 containerd[1461]: time="2026-01-17T00:20:37.076781773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:37.077764 containerd[1461]: time="2026-01-17T00:20:37.077384220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.656627415s" Jan 17 00:20:37.077764 containerd[1461]: time="2026-01-17T00:20:37.077418134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:20:37.092983 containerd[1461]: time="2026-01-17T00:20:37.092679959Z" level=info msg="CreateContainer within sandbox \"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:20:37.108902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586113353.mount: Deactivated successfully.
Jan 17 00:20:37.112633 containerd[1461]: time="2026-01-17T00:20:37.112588333Z" level=info msg="CreateContainer within sandbox \"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121\"" Jan 17 00:20:37.113946 containerd[1461]: time="2026-01-17T00:20:37.113866807Z" level=info msg="StartContainer for \"91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121\"" Jan 17 00:20:37.176972 systemd[1]: Started cri-containerd-91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121.scope - libcontainer container 91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121. Jan 17 00:20:37.235903 containerd[1461]: time="2026-01-17T00:20:37.235854927Z" level=info msg="StartContainer for \"91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121\" returns successfully" Jan 17 00:20:37.250665 systemd[1]: cri-containerd-91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121.scope: Deactivated successfully. Jan 17 00:20:37.299040 kubelet[2529]: I0117 00:20:37.299005 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:37.300439 kubelet[2529]: E0117 00:20:37.300392 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:37.300685 kubelet[2529]: E0117 00:20:37.299638 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:37.325909 kubelet[2529]: I0117 00:20:37.321775 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b4668bdb8-h8h2v" podStartSLOduration=3.265360323 podStartE2EDuration="6.321750585s" podCreationTimestamp="2026-01-17 00:20:31 +0000 UTC" firstStartedPulling="2026-01-17 00:20:32.363992936 +0000 UTC m=+27.424265197" lastFinishedPulling="2026-01-17 00:20:35.420383179 +0000 UTC m=+30.480655459" observedRunningTime="2026-01-17 00:20:36.324621894 +0000 UTC m=+31.384894172" watchObservedRunningTime="2026-01-17 00:20:37.321750585 +0000 UTC m=+32.382022906" Jan 17 00:20:37.335966 containerd[1461]: time="2026-01-17T00:20:37.335672867Z" level=info msg="shim disconnected" id=91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121 namespace=k8s.io Jan 17 00:20:37.335966 containerd[1461]: time="2026-01-17T00:20:37.335740358Z" level=warning msg="cleaning up after shim disconnected" id=91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121 namespace=k8s.io Jan 17 00:20:37.335966 containerd[1461]: time="2026-01-17T00:20:37.335751491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:20:37.439224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ed00a2e2d3f5b247a17c860de8cbf50a8544eaa38925524e6ecfc61cff2121-rootfs.mount: Deactivated successfully. 
Jan 17 00:20:38.136566 kubelet[2529]: E0117 00:20:38.136427 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:38.303665 kubelet[2529]: E0117 00:20:38.303181 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:38.305243 containerd[1461]: time="2026-01-17T00:20:38.305204044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:20:39.332906 kubelet[2529]: I0117 00:20:39.332354 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:20:39.335165 kubelet[2529]: E0117 00:20:39.333742 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:40.136154 kubelet[2529]: E0117 00:20:40.136081 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:40.306596 kubelet[2529]: E0117 00:20:40.306386 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:41.583750 containerd[1461]: time="2026-01-17T00:20:41.583683879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:41.584846 containerd[1461]: time="2026-01-17T00:20:41.584774982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:20:41.585640 containerd[1461]: time="2026-01-17T00:20:41.585607282Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:41.588348 containerd[1461]: time="2026-01-17T00:20:41.588314811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.283064211s" Jan 17 00:20:41.588471 containerd[1461]: time="2026-01-17T00:20:41.588454290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:20:41.589938 containerd[1461]: time="2026-01-17T00:20:41.589911773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:41.597826 containerd[1461]: time="2026-01-17T00:20:41.597773056Z" level=info msg="CreateContainer within sandbox 
\"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:20:41.623090 containerd[1461]: time="2026-01-17T00:20:41.623037613Z" level=info msg="CreateContainer within sandbox \"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae\"" Jan 17 00:20:41.624292 containerd[1461]: time="2026-01-17T00:20:41.624254831Z" level=info msg="StartContainer for \"34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae\"" Jan 17 00:20:41.681826 systemd[1]: Started cri-containerd-34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae.scope - libcontainer container 34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae. Jan 17 00:20:41.733244 containerd[1461]: time="2026-01-17T00:20:41.733175650Z" level=info msg="StartContainer for \"34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae\" returns successfully" Jan 17 00:20:42.136627 kubelet[2529]: E0117 00:20:42.136163 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:42.319862 kubelet[2529]: E0117 00:20:42.319263 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:42.522761 systemd[1]: cri-containerd-34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae.scope: Deactivated successfully. Jan 17 00:20:42.567531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae-rootfs.mount: Deactivated successfully. Jan 17 00:20:42.573463 containerd[1461]: time="2026-01-17T00:20:42.573283413Z" level=info msg="shim disconnected" id=34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae namespace=k8s.io Jan 17 00:20:42.573463 containerd[1461]: time="2026-01-17T00:20:42.573438258Z" level=warning msg="cleaning up after shim disconnected" id=34f277d6c1ac6873357bb9dc877e5c4cdeea4db7afcb55e76edd5eafdf4727ae namespace=k8s.io Jan 17 00:20:42.573463 containerd[1461]: time="2026-01-17T00:20:42.573454264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:20:42.624605 kubelet[2529]: I0117 00:20:42.623687 2529 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:20:42.682727 systemd[1]: Created slice kubepods-burstable-pod16e8fb15_593a_4be8_833b_05df43f1e4e7.slice - libcontainer container kubepods-burstable-pod16e8fb15_593a_4be8_833b_05df43f1e4e7.slice. Jan 17 00:20:42.704057 systemd[1]: Created slice kubepods-besteffort-pod7c550f8d_c288_4bad_bef5_47dd3ed7bb5b.slice - libcontainer container kubepods-besteffort-pod7c550f8d_c288_4bad_bef5_47dd3ed7bb5b.slice. Jan 17 00:20:42.721935 systemd[1]: Created slice kubepods-besteffort-pod8c0578bf_2fb3_4218_b665_10ff5fcbea9f.slice - libcontainer container kubepods-besteffort-pod8c0578bf_2fb3_4218_b665_10ff5fcbea9f.slice. 
Jan 17 00:20:42.741813 systemd[1]: Created slice kubepods-besteffort-podfee43243_8ebd_4cd2_afa5_ba57dc078efe.slice - libcontainer container kubepods-besteffort-podfee43243_8ebd_4cd2_afa5_ba57dc078efe.slice. Jan 17 00:20:42.760222 systemd[1]: Created slice kubepods-burstable-podc5483e8b_299a_4a15_8ed6_7af74d3f03f3.slice - libcontainer container kubepods-burstable-podc5483e8b_299a_4a15_8ed6_7af74d3f03f3.slice. Jan 17 00:20:42.778979 systemd[1]: Created slice kubepods-besteffort-pod1b489003_62f2_46b7_a6af_3a3a669c193c.slice - libcontainer container kubepods-besteffort-pod1b489003_62f2_46b7_a6af_3a3a669c193c.slice. Jan 17 00:20:42.786790 systemd[1]: Created slice kubepods-besteffort-poda4529381_2d40_4d70_a757_b0ee2c920e64.slice - libcontainer container kubepods-besteffort-poda4529381_2d40_4d70_a757_b0ee2c920e64.slice. Jan 17 00:20:42.811590 kubelet[2529]: I0117 00:20:42.810576 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjhz\" (UniqueName: \"kubernetes.io/projected/1b489003-62f2-46b7-a6af-3a3a669c193c-kube-api-access-zgjhz\") pod \"calico-kube-controllers-b9877fd47-255j9\" (UID: \"1b489003-62f2-46b7-a6af-3a3a669c193c\") " pod="calico-system/calico-kube-controllers-b9877fd47-255j9" Jan 17 00:20:42.811590 kubelet[2529]: I0117 00:20:42.810629 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e8fb15-593a-4be8-833b-05df43f1e4e7-config-volume\") pod \"coredns-674b8bbfcf-kg6z8\" (UID: \"16e8fb15-593a-4be8-833b-05df43f1e4e7\") " pod="kube-system/coredns-674b8bbfcf-kg6z8" Jan 17 00:20:42.811590 kubelet[2529]: I0117 00:20:42.810657 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8tk8\" (UniqueName: \"kubernetes.io/projected/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-kube-api-access-f8tk8\") pod \"whisker-6497847d59-vfrxp\" (UID: \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\") " pod="calico-system/whisker-6497847d59-vfrxp" Jan 17 00:20:42.811590 kubelet[2529]: I0117 00:20:42.810678 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8c0578bf-2fb3-4218-b665-10ff5fcbea9f-calico-apiserver-certs\") pod \"calico-apiserver-669cbdb5c4-j86b8\" (UID: \"8c0578bf-2fb3-4218-b665-10ff5fcbea9f\") " pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" Jan 17 00:20:42.811590 kubelet[2529]: I0117 00:20:42.810695 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fee43243-8ebd-4cd2-afa5-ba57dc078efe-calico-apiserver-certs\") pod \"calico-apiserver-669cbdb5c4-xt5pt\" (UID: \"fee43243-8ebd-4cd2-afa5-ba57dc078efe\") " pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" Jan 17 00:20:42.812107 kubelet[2529]: I0117 00:20:42.810767 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zds2l\" (UniqueName: \"kubernetes.io/projected/fee43243-8ebd-4cd2-afa5-ba57dc078efe-kube-api-access-zds2l\") pod \"calico-apiserver-669cbdb5c4-xt5pt\" (UID: \"fee43243-8ebd-4cd2-afa5-ba57dc078efe\") " pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" Jan 17 00:20:42.812107 kubelet[2529]: I0117 00:20:42.810827 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a4529381-2d40-4d70-a757-b0ee2c920e64-goldmane-key-pair\") pod \"goldmane-666569f655-m56rm\" (UID: \"a4529381-2d40-4d70-a757-b0ee2c920e64\") " pod="calico-system/goldmane-666569f655-m56rm" Jan 17 00:20:42.812107 kubelet[2529]: I0117 00:20:42.810868 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b489003-62f2-46b7-a6af-3a3a669c193c-tigera-ca-bundle\") pod \"calico-kube-controllers-b9877fd47-255j9\" (UID: \"1b489003-62f2-46b7-a6af-3a3a669c193c\") " pod="calico-system/calico-kube-controllers-b9877fd47-255j9" Jan 17 00:20:42.812107 kubelet[2529]: I0117 00:20:42.810903 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2z55\" (UniqueName: \"kubernetes.io/projected/16e8fb15-593a-4be8-833b-05df43f1e4e7-kube-api-access-w2z55\") pod \"coredns-674b8bbfcf-kg6z8\" (UID: \"16e8fb15-593a-4be8-833b-05df43f1e4e7\") " pod="kube-system/coredns-674b8bbfcf-kg6z8" Jan 17 00:20:42.812107 kubelet[2529]: I0117 00:20:42.810931 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4529381-2d40-4d70-a757-b0ee2c920e64-config\") pod \"goldmane-666569f655-m56rm\" (UID: \"a4529381-2d40-4d70-a757-b0ee2c920e64\") " pod="calico-system/goldmane-666569f655-m56rm" Jan 17 00:20:42.812240 kubelet[2529]: I0117 00:20:42.810965 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4529381-2d40-4d70-a757-b0ee2c920e64-goldmane-ca-bundle\") pod \"goldmane-666569f655-m56rm\" (UID: \"a4529381-2d40-4d70-a757-b0ee2c920e64\") " pod="calico-system/goldmane-666569f655-m56rm" Jan 17 00:20:42.812240 kubelet[2529]: I0117 00:20:42.810992 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlpp\" (UniqueName: \"kubernetes.io/projected/a4529381-2d40-4d70-a757-b0ee2c920e64-kube-api-access-dxlpp\") pod \"goldmane-666569f655-m56rm\" (UID: \"a4529381-2d40-4d70-a757-b0ee2c920e64\") " pod="calico-system/goldmane-666569f655-m56rm" Jan 17 00:20:42.812240 kubelet[2529]: I0117 00:20:42.811022 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ptw\" (UniqueName: \"kubernetes.io/projected/8c0578bf-2fb3-4218-b665-10ff5fcbea9f-kube-api-access-d6ptw\") pod \"calico-apiserver-669cbdb5c4-j86b8\" (UID: \"8c0578bf-2fb3-4218-b665-10ff5fcbea9f\") " pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" Jan 17 00:20:42.812240 kubelet[2529]: I0117 00:20:42.811078 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-backend-key-pair\") pod \"whisker-6497847d59-vfrxp\" (UID: \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\") " pod="calico-system/whisker-6497847d59-vfrxp" Jan 17 00:20:42.812240 kubelet[2529]: I0117 00:20:42.811116 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-ca-bundle\") pod \"whisker-6497847d59-vfrxp\" (UID: \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\") " 
pod="calico-system/whisker-6497847d59-vfrxp" Jan 17 00:20:42.812426 kubelet[2529]: I0117 00:20:42.811168 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5483e8b-299a-4a15-8ed6-7af74d3f03f3-config-volume\") pod \"coredns-674b8bbfcf-2jwfn\" (UID: \"c5483e8b-299a-4a15-8ed6-7af74d3f03f3\") " pod="kube-system/coredns-674b8bbfcf-2jwfn" Jan 17 00:20:42.812426 kubelet[2529]: I0117 00:20:42.811203 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpqth\" (UniqueName: \"kubernetes.io/projected/c5483e8b-299a-4a15-8ed6-7af74d3f03f3-kube-api-access-hpqth\") pod \"coredns-674b8bbfcf-2jwfn\" (UID: \"c5483e8b-299a-4a15-8ed6-7af74d3f03f3\") " pod="kube-system/coredns-674b8bbfcf-2jwfn" Jan 17 00:20:43.014625 containerd[1461]: time="2026-01-17T00:20:43.014434590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6497847d59-vfrxp,Uid:7c550f8d-c288-4bad-bef5-47dd3ed7bb5b,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:43.035881 containerd[1461]: time="2026-01-17T00:20:43.034362072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-j86b8,Uid:8c0578bf-2fb3-4218-b665-10ff5fcbea9f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:20:43.056757 containerd[1461]: time="2026-01-17T00:20:43.056691263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-xt5pt,Uid:fee43243-8ebd-4cd2-afa5-ba57dc078efe,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:20:43.071687 kubelet[2529]: E0117 00:20:43.070442 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:43.086960 containerd[1461]: time="2026-01-17T00:20:43.086768923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9877fd47-255j9,Uid:1b489003-62f2-46b7-a6af-3a3a669c193c,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:43.091979 containerd[1461]: time="2026-01-17T00:20:43.091914533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2jwfn,Uid:c5483e8b-299a-4a15-8ed6-7af74d3f03f3,Namespace:kube-system,Attempt:0,}" Jan 17 00:20:43.116597 containerd[1461]: time="2026-01-17T00:20:43.116523126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-m56rm,Uid:a4529381-2d40-4d70-a757-b0ee2c920e64,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:43.295708 kubelet[2529]: E0117 00:20:43.294010 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:43.300062 containerd[1461]: time="2026-01-17T00:20:43.299672841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kg6z8,Uid:16e8fb15-593a-4be8-833b-05df43f1e4e7,Namespace:kube-system,Attempt:0,}" Jan 17 00:20:43.376129 kubelet[2529]: E0117 00:20:43.374267 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:43.385943 containerd[1461]: time="2026-01-17T00:20:43.385900878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:20:43.514343 containerd[1461]: time="2026-01-17T00:20:43.514257835Z" 
level=error msg="Failed to destroy network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.518460 containerd[1461]: time="2026-01-17T00:20:43.518271709Z" level=error msg="encountered an error cleaning up failed sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.518460 containerd[1461]: time="2026-01-17T00:20:43.518380475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6497847d59-vfrxp,Uid:7c550f8d-c288-4bad-bef5-47dd3ed7bb5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.520603 kubelet[2529]: E0117 00:20:43.519971 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.520603 kubelet[2529]: E0117 00:20:43.520069 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6497847d59-vfrxp" Jan 17 00:20:43.520603 kubelet[2529]: E0117 00:20:43.520110 2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6497847d59-vfrxp" Jan 17 00:20:43.522578 kubelet[2529]: E0117 00:20:43.520195 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6497847d59-vfrxp_calico-system(7c550f8d-c288-4bad-bef5-47dd3ed7bb5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6497847d59-vfrxp_calico-system(7c550f8d-c288-4bad-bef5-47dd3ed7bb5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6497847d59-vfrxp" podUID="7c550f8d-c288-4bad-bef5-47dd3ed7bb5b" Jan 17 00:20:43.525078 
containerd[1461]: time="2026-01-17T00:20:43.523775193Z" level=error msg="Failed to destroy network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.527440 containerd[1461]: time="2026-01-17T00:20:43.527342140Z" level=error msg="encountered an error cleaning up failed sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.527440 containerd[1461]: time="2026-01-17T00:20:43.527428579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9877fd47-255j9,Uid:1b489003-62f2-46b7-a6af-3a3a669c193c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.536363 containerd[1461]: time="2026-01-17T00:20:43.536286327Z" level=error msg="Failed to destroy network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.537201 containerd[1461]: time="2026-01-17T00:20:43.537041230Z" level=error msg="Failed to destroy network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.538709 containerd[1461]: time="2026-01-17T00:20:43.538674786Z" level=error msg="Failed to destroy network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.538913 kubelet[2529]: E0117 00:20:43.538870 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.539114 kubelet[2529]: E0117 00:20:43.539029 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" Jan 17 00:20:43.539114 kubelet[2529]: E0117 00:20:43.539062 
2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" Jan 17 00:20:43.539552 containerd[1461]: time="2026-01-17T00:20:43.539262726Z" level=error msg="encountered an error cleaning up failed sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.539552 containerd[1461]: time="2026-01-17T00:20:43.539371352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-xt5pt,Uid:fee43243-8ebd-4cd2-afa5-ba57dc078efe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.540408 containerd[1461]: time="2026-01-17T00:20:43.540230780Z" level=error msg="encountered an error cleaning up failed sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.540408 containerd[1461]: time="2026-01-17T00:20:43.540307250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-m56rm,Uid:a4529381-2d40-4d70-a757-b0ee2c920e64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.540678 kubelet[2529]: E0117 00:20:43.539238 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b9877fd47-255j9_calico-system(1b489003-62f2-46b7-a6af-3a3a669c193c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b9877fd47-255j9_calico-system(1b489003-62f2-46b7-a6af-3a3a669c193c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:20:43.541605 containerd[1461]: time="2026-01-17T00:20:43.540770787Z" level=error msg="encountered an error cleaning up failed sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.541605 containerd[1461]: time="2026-01-17T00:20:43.540910322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-j86b8,Uid:8c0578bf-2fb3-4218-b665-10ff5fcbea9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.541802 kubelet[2529]: E0117 00:20:43.540924 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.541802 kubelet[2529]: E0117 00:20:43.540973 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-m56rm" Jan 17 00:20:43.541802 kubelet[2529]: E0117 00:20:43.541000 2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-m56rm" Jan 17 00:20:43.541963 kubelet[2529]: E0117 00:20:43.541077 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-m56rm_calico-system(a4529381-2d40-4d70-a757-b0ee2c920e64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-m56rm_calico-system(a4529381-2d40-4d70-a757-b0ee2c920e64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64" Jan 17 00:20:43.541963 kubelet[2529]: E0117 00:20:43.541122 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.541963 kubelet[2529]: E0117 00:20:43.541139 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" Jan 17 00:20:43.543156 kubelet[2529]: E0117 00:20:43.541153 2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" Jan 17 00:20:43.543156 kubelet[2529]: E0117 00:20:43.541178 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-669cbdb5c4-xt5pt_calico-apiserver(fee43243-8ebd-4cd2-afa5-ba57dc078efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-669cbdb5c4-xt5pt_calico-apiserver(fee43243-8ebd-4cd2-afa5-ba57dc078efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:20:43.543156 kubelet[2529]: E0117 00:20:43.541243 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.543304 kubelet[2529]: E0117 00:20:43.541259 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" Jan 17 00:20:43.543304 kubelet[2529]: E0117 00:20:43.541272 2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" Jan 17 00:20:43.543304 kubelet[2529]: E0117 00:20:43.541297 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-669cbdb5c4-j86b8_calico-apiserver(8c0578bf-2fb3-4218-b665-10ff5fcbea9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-669cbdb5c4-j86b8_calico-apiserver(8c0578bf-2fb3-4218-b665-10ff5fcbea9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:20:43.551189 containerd[1461]: time="2026-01-17T00:20:43.551042742Z" level=error msg="Failed to destroy network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.555100 containerd[1461]: time="2026-01-17T00:20:43.554393594Z" level=error msg="encountered an error cleaning up failed sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.555100 containerd[1461]: time="2026-01-17T00:20:43.554478812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2jwfn,Uid:c5483e8b-299a-4a15-8ed6-7af74d3f03f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.555276 kubelet[2529]: E0117 00:20:43.554783 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.555276 kubelet[2529]: E0117 00:20:43.554856 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2jwfn" Jan 17 00:20:43.555276 kubelet[2529]: E0117 00:20:43.554884 2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2jwfn" Jan 17 00:20:43.555366 kubelet[2529]: E0117 00:20:43.554981 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-2jwfn_kube-system(c5483e8b-299a-4a15-8ed6-7af74d3f03f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2jwfn_kube-system(c5483e8b-299a-4a15-8ed6-7af74d3f03f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2jwfn" podUID="c5483e8b-299a-4a15-8ed6-7af74d3f03f3" Jan 17 00:20:43.599052 containerd[1461]: time="2026-01-17T00:20:43.598998721Z" level=error msg="Failed to destroy network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.599720 containerd[1461]: time="2026-01-17T00:20:43.599593697Z" level=error msg="encountered an error cleaning up failed sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.599720 containerd[1461]: time="2026-01-17T00:20:43.599665233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kg6z8,Uid:16e8fb15-593a-4be8-833b-05df43f1e4e7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.600350 kubelet[2529]: E0117 00:20:43.600263 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:43.600521 kubelet[2529]: E0117 00:20:43.600388 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kg6z8" Jan 17 00:20:43.600521 kubelet[2529]: E0117 00:20:43.600423 2529 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kg6z8" Jan 17 00:20:43.601747 kubelet[2529]: E0117 00:20:43.601611 2529 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kg6z8_kube-system(16e8fb15-593a-4be8-833b-05df43f1e4e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kg6z8_kube-system(16e8fb15-593a-4be8-833b-05df43f1e4e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kg6z8" podUID="16e8fb15-593a-4be8-833b-05df43f1e4e7" Jan 17 00:20:44.148433 systemd[1]: Created slice kubepods-besteffort-pod96bd27e8_f4d7_4ca9_8ceb_fc56f28a33f0.slice - libcontainer container kubepods-besteffort-pod96bd27e8_f4d7_4ca9_8ceb_fc56f28a33f0.slice. Jan 17 00:20:44.154445 containerd[1461]: time="2026-01-17T00:20:44.154237081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dcsb9,Uid:96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:44.267806 containerd[1461]: time="2026-01-17T00:20:44.267680981Z" level=error msg="Failed to destroy network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.268523 containerd[1461]: time="2026-01-17T00:20:44.268197593Z" level=error msg="encountered an error cleaning up failed sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.268523 containerd[1461]: time="2026-01-17T00:20:44.268288181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dcsb9,Uid:96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.268865 kubelet[2529]: E0117 00:20:44.268687 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.268865 kubelet[2529]: E0117 00:20:44.268847 2529 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dcsb9" Jan 17 00:20:44.271720 kubelet[2529]: E0117 00:20:44.268885 2529 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dcsb9" Jan 17 00:20:44.271720 kubelet[2529]: E0117 00:20:44.268973 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:44.275239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745-shm.mount: Deactivated successfully. Jan 17 00:20:44.378108 kubelet[2529]: I0117 00:20:44.377856 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:20:44.386829 kubelet[2529]: I0117 00:20:44.385503 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:20:44.386999 containerd[1461]: time="2026-01-17T00:20:44.386424428Z" level=info msg="StopPodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\"" Jan 17 00:20:44.391778 containerd[1461]: time="2026-01-17T00:20:44.390436655Z" level=info msg="StopPodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\"" Jan 17 00:20:44.393374 containerd[1461]: time="2026-01-17T00:20:44.393309211Z" level=info msg="Ensure that sandbox 508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507 in task-service has been cleanup successfully" Jan 17 00:20:44.393979 containerd[1461]: time="2026-01-17T00:20:44.393335888Z" level=info msg="Ensure that sandbox 87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745 in task-service has been cleanup successfully" Jan 17 00:20:44.416926 kubelet[2529]: I0117 00:20:44.416571 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:20:44.423600 containerd[1461]: time="2026-01-17T00:20:44.422856309Z" level=info msg="StopPodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\"" Jan 17 00:20:44.424938 containerd[1461]: time="2026-01-17T00:20:44.424260855Z" level=info msg="Ensure that sandbox ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c in task-service has been cleanup successfully" Jan 17 00:20:44.434139 kubelet[2529]: I0117 00:20:44.434103 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:20:44.436409 containerd[1461]: time="2026-01-17T00:20:44.436054352Z" 
level=info msg="StopPodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\"" Jan 17 00:20:44.438159 containerd[1461]: time="2026-01-17T00:20:44.438010214Z" level=info msg="Ensure that sandbox 0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c in task-service has been cleanup successfully" Jan 17 00:20:44.466708 kubelet[2529]: I0117 00:20:44.464681 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:20:44.469829 containerd[1461]: time="2026-01-17T00:20:44.468371405Z" level=info msg="StopPodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\"" Jan 17 00:20:44.469829 containerd[1461]: time="2026-01-17T00:20:44.468663650Z" level=info msg="Ensure that sandbox 4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221 in task-service has been cleanup successfully" Jan 17 00:20:44.500902 kubelet[2529]: I0117 00:20:44.500852 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:20:44.511249 containerd[1461]: time="2026-01-17T00:20:44.510623900Z" level=info msg="StopPodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\"" Jan 17 00:20:44.514963 kubelet[2529]: I0117 00:20:44.514905 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:20:44.519710 containerd[1461]: time="2026-01-17T00:20:44.519642234Z" level=info msg="Ensure that sandbox 3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5 in task-service has been cleanup successfully" Jan 17 00:20:44.521735 containerd[1461]: time="2026-01-17T00:20:44.521680413Z" level=info msg="StopPodSandbox for \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\"" Jan 17 00:20:44.521967 containerd[1461]: time="2026-01-17T00:20:44.521951713Z" level=info msg="Ensure that sandbox f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7 in task-service has been cleanup successfully" Jan 17 00:20:44.551386 kubelet[2529]: I0117 00:20:44.551338 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:20:44.562171 containerd[1461]: time="2026-01-17T00:20:44.558155789Z" level=info msg="StopPodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\"" Jan 17 00:20:44.565033 containerd[1461]: time="2026-01-17T00:20:44.564960161Z" level=info msg="Ensure that sandbox 838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac in task-service has been cleanup successfully" Jan 17 00:20:44.608301 containerd[1461]: time="2026-01-17T00:20:44.607878207Z" level=error msg="StopPodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" failed" error="failed to destroy network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.609575 kubelet[2529]: E0117 00:20:44.609492 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:20:44.609729 kubelet[2529]: E0117 00:20:44.609602 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507"} Jan 17 00:20:44.609729 kubelet[2529]: E0117 00:20:44.609698 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4529381-2d40-4d70-a757-b0ee2c920e64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.609898 kubelet[2529]: E0117 00:20:44.609739 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4529381-2d40-4d70-a757-b0ee2c920e64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64" Jan 17 00:20:44.656416 containerd[1461]: time="2026-01-17T00:20:44.656080428Z" level=error msg="StopPodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" failed" error="failed to destroy network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.657037 kubelet[2529]: E0117 00:20:44.656941 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:20:44.657183 kubelet[2529]: E0117 00:20:44.657049 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c"} Jan 17 00:20:44.657183 kubelet[2529]: E0117 00:20:44.657120 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5483e8b-299a-4a15-8ed6-7af74d3f03f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 17 00:20:44.657330 kubelet[2529]: E0117 00:20:44.657179 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5483e8b-299a-4a15-8ed6-7af74d3f03f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2jwfn" podUID="c5483e8b-299a-4a15-8ed6-7af74d3f03f3" Jan 17 00:20:44.679686 containerd[1461]: time="2026-01-17T00:20:44.679484011Z" level=error msg="StopPodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" failed" error="failed to destroy network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.680572 kubelet[2529]: E0117 00:20:44.679818 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:20:44.680572 kubelet[2529]: E0117 00:20:44.679895 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745"} Jan 17 00:20:44.680572 kubelet[2529]: E0117 00:20:44.679949 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.680572 kubelet[2529]: E0117 00:20:44.679992 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:44.692947 containerd[1461]: time="2026-01-17T00:20:44.692675409Z" level=error msg="StopPodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" failed" error="failed to destroy network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 17 00:20:44.693640 kubelet[2529]: E0117 00:20:44.693581 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:20:44.693798 kubelet[2529]: E0117 00:20:44.693655 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221"} Jan 17 00:20:44.693798 kubelet[2529]: E0117 00:20:44.693712 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16e8fb15-593a-4be8-833b-05df43f1e4e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.693798 kubelet[2529]: E0117 00:20:44.693751 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16e8fb15-593a-4be8-833b-05df43f1e4e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kg6z8" podUID="16e8fb15-593a-4be8-833b-05df43f1e4e7" Jan 17 00:20:44.705375 containerd[1461]: time="2026-01-17T00:20:44.705310950Z" level=error msg="StopPodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" failed" error="failed to destroy network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.706330 kubelet[2529]: E0117 00:20:44.706267 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:20:44.706646 kubelet[2529]: E0117 00:20:44.706373 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c"} Jan 17 00:20:44.706646 kubelet[2529]: E0117 00:20:44.706564 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b489003-62f2-46b7-a6af-3a3a669c193c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.706825 kubelet[2529]: E0117 00:20:44.706605 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b489003-62f2-46b7-a6af-3a3a669c193c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:20:44.709223 containerd[1461]: time="2026-01-17T00:20:44.708867259Z" level=error msg="StopPodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" failed" error="failed to destroy network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.710751 kubelet[2529]: E0117 00:20:44.709208 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:20:44.710751 kubelet[2529]: E0117 00:20:44.709265 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5"} Jan 17 00:20:44.710751 kubelet[2529]: E0117 00:20:44.709312 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fee43243-8ebd-4cd2-afa5-ba57dc078efe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.710751 kubelet[2529]: E0117 00:20:44.709342 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fee43243-8ebd-4cd2-afa5-ba57dc078efe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:20:44.735302 containerd[1461]: time="2026-01-17T00:20:44.734182098Z" level=error msg="StopPodSandbox for 
\"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" failed" error="failed to destroy network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.735616 kubelet[2529]: E0117 00:20:44.734503 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:20:44.735616 kubelet[2529]: E0117 00:20:44.734580 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7"} Jan 17 00:20:44.735616 kubelet[2529]: E0117 00:20:44.734615 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c0578bf-2fb3-4218-b665-10ff5fcbea9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.735616 kubelet[2529]: E0117 00:20:44.734642 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c0578bf-2fb3-4218-b665-10ff5fcbea9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:20:44.747845 containerd[1461]: time="2026-01-17T00:20:44.746940311Z" level=error msg="StopPodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" failed" error="failed to destroy network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:20:44.748038 kubelet[2529]: E0117 00:20:44.747365 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:20:44.748038 kubelet[2529]: E0117 00:20:44.747435 2529 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac"} Jan 17 00:20:44.748038 kubelet[2529]: E0117 00:20:44.747488 2529 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:20:44.748038 kubelet[2529]: E0117 00:20:44.747517 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6497847d59-vfrxp" podUID="7c550f8d-c288-4bad-bef5-47dd3ed7bb5b" Jan 17 00:20:50.424111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931282684.mount: Deactivated successfully. Jan 17 00:20:50.541782 containerd[1461]: time="2026-01-17T00:20:50.537180414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:20:50.542564 containerd[1461]: time="2026-01-17T00:20:50.533507681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:50.576184 containerd[1461]: time="2026-01-17T00:20:50.576123666Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:50.577428 containerd[1461]: time="2026-01-17T00:20:50.577366571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:20:50.626120 containerd[1461]: time="2026-01-17T00:20:50.625553434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.239357024s" Jan 17 00:20:50.626120 containerd[1461]: time="2026-01-17T00:20:50.625615261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:20:50.773412 containerd[1461]: time="2026-01-17T00:20:50.773217224Z" level=info msg="CreateContainer within sandbox \"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:20:50.818596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3600032854.mount: Deactivated successfully. 
Jan 17 00:20:50.826083 containerd[1461]: time="2026-01-17T00:20:50.825999735Z" level=info msg="CreateContainer within sandbox \"dcb0197f9e05aa3bd378e3d43edf3b360375bab6491b478dd382df8f9fad4adb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"df59347b37e0c62a710d20332e5d4e3c3bff8682ab61a274dbb5a5c99f3a9a22\"" Jan 17 00:20:50.835591 containerd[1461]: time="2026-01-17T00:20:50.835446696Z" level=info msg="StartContainer for \"df59347b37e0c62a710d20332e5d4e3c3bff8682ab61a274dbb5a5c99f3a9a22\"" Jan 17 00:20:50.956852 systemd[1]: Started cri-containerd-df59347b37e0c62a710d20332e5d4e3c3bff8682ab61a274dbb5a5c99f3a9a22.scope - libcontainer container df59347b37e0c62a710d20332e5d4e3c3bff8682ab61a274dbb5a5c99f3a9a22. Jan 17 00:20:51.054991 containerd[1461]: time="2026-01-17T00:20:51.054282431Z" level=info msg="StartContainer for \"df59347b37e0c62a710d20332e5d4e3c3bff8682ab61a274dbb5a5c99f3a9a22\" returns successfully" Jan 17 00:20:51.240837 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:20:51.243234 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:20:51.456578 containerd[1461]: time="2026-01-17T00:20:51.456404005Z" level=info msg="StopPodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\"" Jan 17 00:20:51.593598 kubelet[2529]: E0117 00:20:51.592481 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:51.676617 kubelet[2529]: I0117 00:20:51.669108 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qsgwf" podStartSLOduration=1.677006527 podStartE2EDuration="19.639860614s" podCreationTimestamp="2026-01-17 00:20:32 +0000 UTC" firstStartedPulling="2026-01-17 00:20:32.698077342 +0000 UTC m=+27.758349603" lastFinishedPulling="2026-01-17 00:20:50.660931429 +0000 UTC m=+45.721203690" observedRunningTime="2026-01-17 00:20:51.638424984 +0000 UTC m=+46.698697269" watchObservedRunningTime="2026-01-17 00:20:51.639860614 +0000 UTC m=+46.700132904" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.582 [INFO][3758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.584 [INFO][3758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" iface="eth0" netns="/var/run/netns/cni-e3fa710d-f86f-74a9-d2d5-57418450d332" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.585 [INFO][3758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" iface="eth0" netns="/var/run/netns/cni-e3fa710d-f86f-74a9-d2d5-57418450d332" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.586 [INFO][3758] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" iface="eth0" netns="/var/run/netns/cni-e3fa710d-f86f-74a9-d2d5-57418450d332" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.586 [INFO][3758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.586 [INFO][3758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.803 [INFO][3769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.805 [INFO][3769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.806 [INFO][3769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.820 [WARNING][3769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.820 [INFO][3769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.822 [INFO][3769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:51.831966 containerd[1461]: 2026-01-17 00:20:51.826 [INFO][3758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:20:51.833490 containerd[1461]: time="2026-01-17T00:20:51.832585787Z" level=info msg="TearDown network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" successfully" Jan 17 00:20:51.833490 containerd[1461]: time="2026-01-17T00:20:51.832626192Z" level=info msg="StopPodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" returns successfully" Jan 17 00:20:51.835679 systemd[1]: run-netns-cni\x2de3fa710d\x2df86f\x2d74a9\x2dd2d5\x2d57418450d332.mount: Deactivated successfully. 
Jan 17 00:20:51.907454 kubelet[2529]: I0117 00:20:51.907395 2529 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-ca-bundle\") pod \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\" (UID: \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\") " Jan 17 00:20:51.907454 kubelet[2529]: I0117 00:20:51.907456 2529 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8tk8\" (UniqueName: \"kubernetes.io/projected/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-kube-api-access-f8tk8\") pod \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\" (UID: \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\") " Jan 17 00:20:51.907690 kubelet[2529]: I0117 00:20:51.907487 2529 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-backend-key-pair\") pod \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\" (UID: \"7c550f8d-c288-4bad-bef5-47dd3ed7bb5b\") " Jan 17 00:20:51.926469 kubelet[2529]: I0117 00:20:51.922666 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7c550f8d-c288-4bad-bef5-47dd3ed7bb5b" (UID: "7c550f8d-c288-4bad-bef5-47dd3ed7bb5b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:20:51.929804 systemd[1]: var-lib-kubelet-pods-7c550f8d\x2dc288\x2d4bad\x2dbef5\x2d47dd3ed7bb5b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:20:51.930369 kubelet[2529]: I0117 00:20:51.930271 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-kube-api-access-f8tk8" (OuterVolumeSpecName: "kube-api-access-f8tk8") pod "7c550f8d-c288-4bad-bef5-47dd3ed7bb5b" (UID: "7c550f8d-c288-4bad-bef5-47dd3ed7bb5b"). InnerVolumeSpecName "kube-api-access-f8tk8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:20:51.931192 kubelet[2529]: I0117 00:20:51.931122 2529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7c550f8d-c288-4bad-bef5-47dd3ed7bb5b" (UID: "7c550f8d-c288-4bad-bef5-47dd3ed7bb5b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:20:51.939695 systemd[1]: var-lib-kubelet-pods-7c550f8d\x2dc288\x2d4bad\x2dbef5\x2d47dd3ed7bb5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df8tk8.mount: Deactivated successfully. 
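[Note: the recurring kubelet warning "Nameserver limits exceeded" (dns.go:153) means the node's resolv.conf listed more nameserver entries than the kubelet will propagate to a pod; the cap it enforces is three, mirroring the classic glibc resolver limit, which is why the applied line shown keeps exactly three entries. A rough illustration of that truncation, where the fourth server is a hypothetical extra entry (the journal only records the post-truncation result):

    package main

    import "fmt"

    // The kubelet keeps at most three nameservers when building a pod's
    // resolv.conf, matching the glibc resolver limit.
    const maxNameservers = 3

    func main() {
    	// 203.0.113.1 is a made-up extra entry; the log only shows the
    	// applied line "67.207.67.2 67.207.67.3 67.207.67.2".
    	servers := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "203.0.113.1"}
    	if len(servers) > maxNameservers {
    		servers = servers[:maxNameservers] // later entries are omitted, as the warning says
    	}
    	fmt.Println(servers)
    }

The warning is cosmetic here; it does not affect the sandbox failures logged around it.]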
Jan 17 00:20:52.008091 kubelet[2529]: I0117 00:20:52.008050 2529 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8tk8\" (UniqueName: \"kubernetes.io/projected/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-kube-api-access-f8tk8\") on node \"ci-4081.3.6-n-8cc98427e3\" DevicePath \"\"" Jan 17 00:20:52.009501 kubelet[2529]: I0117 00:20:52.009411 2529 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-8cc98427e3\" DevicePath \"\"" Jan 17 00:20:52.009501 kubelet[2529]: I0117 00:20:52.009467 2529 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b-whisker-ca-bundle\") on node \"ci-4081.3.6-n-8cc98427e3\" DevicePath \"\"" Jan 17 00:20:52.603267 systemd[1]: Removed slice kubepods-besteffort-pod7c550f8d_c288_4bad_bef5_47dd3ed7bb5b.slice - libcontainer container kubepods-besteffort-pod7c550f8d_c288_4bad_bef5_47dd3ed7bb5b.slice. Jan 17 00:20:52.611827 kubelet[2529]: E0117 00:20:52.611395 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:52.831849 systemd[1]: Created slice kubepods-besteffort-poddbdec3ca_a9b5_4e95_bddf_4459d785adf7.slice - libcontainer container kubepods-besteffort-poddbdec3ca_a9b5_4e95_bddf_4459d785adf7.slice. Jan 17 00:20:52.918271 kubelet[2529]: I0117 00:20:52.918210 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbdec3ca-a9b5-4e95-bddf-4459d785adf7-whisker-backend-key-pair\") pod \"whisker-65bcbb5f55-c8kgh\" (UID: \"dbdec3ca-a9b5-4e95-bddf-4459d785adf7\") " pod="calico-system/whisker-65bcbb5f55-c8kgh" Jan 17 00:20:52.918493 kubelet[2529]: I0117 00:20:52.918335 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdec3ca-a9b5-4e95-bddf-4459d785adf7-whisker-ca-bundle\") pod \"whisker-65bcbb5f55-c8kgh\" (UID: \"dbdec3ca-a9b5-4e95-bddf-4459d785adf7\") " pod="calico-system/whisker-65bcbb5f55-c8kgh" Jan 17 00:20:52.918493 kubelet[2529]: I0117 00:20:52.918374 2529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr94t\" (UniqueName: \"kubernetes.io/projected/dbdec3ca-a9b5-4e95-bddf-4459d785adf7-kube-api-access-fr94t\") pod \"whisker-65bcbb5f55-c8kgh\" (UID: \"dbdec3ca-a9b5-4e95-bddf-4459d785adf7\") " pod="calico-system/whisker-65bcbb5f55-c8kgh" Jan 17 00:20:53.141067 containerd[1461]: time="2026-01-17T00:20:53.140631726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bcbb5f55-c8kgh,Uid:dbdec3ca-a9b5-4e95-bddf-4459d785adf7,Namespace:calico-system,Attempt:0,}" Jan 17 00:20:53.155960 kubelet[2529]: I0117 00:20:53.155857 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c550f8d-c288-4bad-bef5-47dd3ed7bb5b" path="/var/lib/kubelet/pods/7c550f8d-c288-4bad-bef5-47dd3ed7bb5b/volumes" Jan 17 00:20:53.453642 systemd-networkd[1371]: calidf25c61e1cb: Link UP Jan 17 00:20:53.453880 systemd-networkd[1371]: calidf25c61e1cb: Gained carrier Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.245 [INFO][3917] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.268 [INFO][3917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0 whisker-65bcbb5f55- calico-system dbdec3ca-a9b5-4e95-bddf-4459d785adf7 953 0 2026-01-17 00:20:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65bcbb5f55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 whisker-65bcbb5f55-c8kgh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf25c61e1cb [] [] }} ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.269 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.344 [INFO][3933] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" HandleID="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.345 [INFO][3933] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" HandleID="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003236e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"whisker-65bcbb5f55-c8kgh", "timestamp":"2026-01-17 00:20:53.344693646 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.345 [INFO][3933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.345 [INFO][3933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.345 [INFO][3933] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.364 [INFO][3933] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.378 [INFO][3933] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.387 [INFO][3933] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.391 [INFO][3933] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.395 [INFO][3933] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.395 [INFO][3933] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.398 [INFO][3933] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339 Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.403 [INFO][3933] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.412 [INFO][3933] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.129/26] block=192.168.60.128/26 handle="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.412 [INFO][3933] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.129/26] handle="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.412 [INFO][3933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:53.478990 containerd[1461]: 2026-01-17 00:20:53.412 [INFO][3933] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.129/26] IPv6=[] ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" HandleID="k8s-pod-network.c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.486281 containerd[1461]: 2026-01-17 00:20:53.422 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0", GenerateName:"whisker-65bcbb5f55-", Namespace:"calico-system", SelfLink:"", UID:"dbdec3ca-a9b5-4e95-bddf-4459d785adf7", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bcbb5f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"whisker-65bcbb5f55-c8kgh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf25c61e1cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:53.486281 containerd[1461]: 2026-01-17 00:20:53.423 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.129/32] ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.486281 containerd[1461]: 2026-01-17 00:20:53.423 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf25c61e1cb ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.486281 containerd[1461]: 2026-01-17 00:20:53.447 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.486281 containerd[1461]: 2026-01-17 00:20:53.448 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" 
Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0", GenerateName:"whisker-65bcbb5f55-", Namespace:"calico-system", SelfLink:"", UID:"dbdec3ca-a9b5-4e95-bddf-4459d785adf7", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bcbb5f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339", Pod:"whisker-65bcbb5f55-c8kgh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf25c61e1cb", MAC:"82:40:31:ee:e3:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:53.486281 containerd[1461]: 2026-01-17 00:20:53.464 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339" Namespace="calico-system" Pod="whisker-65bcbb5f55-c8kgh" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--65bcbb5f55--c8kgh-eth0" Jan 17 00:20:53.548613 containerd[1461]: time="2026-01-17T00:20:53.546847232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:53.548613 containerd[1461]: time="2026-01-17T00:20:53.547424495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:53.548613 containerd[1461]: time="2026-01-17T00:20:53.547451990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:53.550468 containerd[1461]: time="2026-01-17T00:20:53.550231956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:53.589992 systemd[1]: Started cri-containerd-c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339.scope - libcontainer container c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339. 
Jan 17 00:20:53.601870 kubelet[2529]: E0117 00:20:53.601079 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:20:53.758321 containerd[1461]: time="2026-01-17T00:20:53.757425670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bcbb5f55-c8kgh,Uid:dbdec3ca-a9b5-4e95-bddf-4459d785adf7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0119589a2c719752db2951d31546a782d01117d495430429f1c4c1fbf6e0339\"" Jan 17 00:20:53.761050 containerd[1461]: time="2026-01-17T00:20:53.760446186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:20:53.838587 kernel: bpftool[4022]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:20:54.106765 containerd[1461]: time="2026-01-17T00:20:54.106519163Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:54.128617 containerd[1461]: time="2026-01-17T00:20:54.109148142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:20:54.128617 containerd[1461]: time="2026-01-17T00:20:54.109223858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:20:54.134726 kubelet[2529]: E0117 00:20:54.134614 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:20:54.135730 kubelet[2529]: E0117 00:20:54.135396 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:20:54.141119 kubelet[2529]: E0117 00:20:54.140976 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d662067c3c9b46248b5886cf8459eddd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bcbb5f55-c8kgh_calico-system(dbdec3ca-a9b5-4e95-bddf-4459d785adf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:54.144768 containerd[1461]: time="2026-01-17T00:20:54.144654864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:20:54.338127 systemd-networkd[1371]: vxlan.calico: Link UP Jan 17 00:20:54.338661 systemd-networkd[1371]: vxlan.calico: Gained carrier Jan 17 00:20:54.460344 containerd[1461]: time="2026-01-17T00:20:54.460107098Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:54.462597 containerd[1461]: time="2026-01-17T00:20:54.461168365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:20:54.462597 containerd[1461]: time="2026-01-17T00:20:54.461267777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:20:54.462818 kubelet[2529]: E0117 00:20:54.461449 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:20:54.462818 kubelet[2529]: E0117 00:20:54.461652 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:20:54.462889 kubelet[2529]: E0117 00:20:54.461863 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bcbb5f55-c8kgh_calico-system(dbdec3ca-a9b5-4e95-bddf-4459d785adf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:54.463308 kubelet[2529]: E0117 00:20:54.463250 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7" Jan 17 00:20:54.611459 kubelet[2529]: E0117 00:20:54.611367 2529 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7" Jan 17 00:20:54.916815 systemd-networkd[1371]: calidf25c61e1cb: Gained IPv6LL Jan 17 00:20:55.139386 containerd[1461]: time="2026-01-17T00:20:55.138866666Z" level=info msg="StopPodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\"" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.206 [INFO][4127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.207 [INFO][4127] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" iface="eth0" netns="/var/run/netns/cni-1272d78f-a257-c9e0-b7ba-9a343cda3b6f" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.208 [INFO][4127] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" iface="eth0" netns="/var/run/netns/cni-1272d78f-a257-c9e0-b7ba-9a343cda3b6f" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.208 [INFO][4127] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" iface="eth0" netns="/var/run/netns/cni-1272d78f-a257-c9e0-b7ba-9a343cda3b6f" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.208 [INFO][4127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.208 [INFO][4127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.243 [INFO][4135] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.243 [INFO][4135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.243 [INFO][4135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.252 [WARNING][4135] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.252 [INFO][4135] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.254 [INFO][4135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:55.260647 containerd[1461]: 2026-01-17 00:20:55.257 [INFO][4127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:20:55.265048 containerd[1461]: time="2026-01-17T00:20:55.261807215Z" level=info msg="TearDown network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" successfully" Jan 17 00:20:55.265048 containerd[1461]: time="2026-01-17T00:20:55.263608580Z" level=info msg="StopPodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" returns successfully" Jan 17 00:20:55.265048 containerd[1461]: time="2026-01-17T00:20:55.264812467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-xt5pt,Uid:fee43243-8ebd-4cd2-afa5-ba57dc078efe,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:20:55.267690 systemd[1]: run-netns-cni\x2d1272d78f\x2da257\x2dc9e0\x2db7ba\x2d9a343cda3b6f.mount: Deactivated successfully. 
Jan 17 00:20:55.479782 systemd-networkd[1371]: calia9f80d6b616: Link UP Jan 17 00:20:55.479986 systemd-networkd[1371]: calia9f80d6b616: Gained carrier Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.321 [INFO][4143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0 calico-apiserver-669cbdb5c4- calico-apiserver fee43243-8ebd-4cd2-afa5-ba57dc078efe 978 0 2026-01-17 00:20:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:669cbdb5c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 calico-apiserver-669cbdb5c4-xt5pt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia9f80d6b616 [] [] }} ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.321 [INFO][4143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.369 [INFO][4155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" HandleID="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.369 [INFO][4155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" HandleID="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"calico-apiserver-669cbdb5c4-xt5pt", "timestamp":"2026-01-17 00:20:55.369171624 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.369 [INFO][4155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.369 [INFO][4155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
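[annotation] Every IPAM transaction in this log is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", so concurrent CNI invocations on the node serialize their allocation-block updates. The file-lock sketch below illustrates that pattern only; the lock path is hypothetical and this is not Calico's actual implementation.

```go
// Illustrative host-wide lock using flock(2); pattern only, not Calico code.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical lock path, for illustration only.
	f, err := os.OpenFile("/var/run/example/ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// LOCK_EX blocks until no other process holds the lock, so only one
	// invocation on the node mutates allocation state at a time.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		panic(err)
	}
	fmt.Println("acquired host-wide lock")

	// ... load block 192.168.60.128/26, claim an address, write it back ...

	if err := unix.Flock(int(f.Fd()), unix.LOCK_UN); err != nil {
		panic(err)
	}
	fmt.Println("released host-wide lock")
}
```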
Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.369 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.384 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.396 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.416 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.423 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.426 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.426 [INFO][4155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.429 [INFO][4155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.441 [INFO][4155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.468 [INFO][4155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.130/26] block=192.168.60.128/26 handle="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.468 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.130/26] handle="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.468 [INFO][4155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:55.521046 containerd[1461]: 2026-01-17 00:20:55.468 [INFO][4155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.130/26] IPv6=[] ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" HandleID="k8s-pod-network.182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.522691 containerd[1461]: 2026-01-17 00:20:55.472 [INFO][4143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"fee43243-8ebd-4cd2-afa5-ba57dc078efe", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"calico-apiserver-669cbdb5c4-xt5pt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9f80d6b616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:55.522691 containerd[1461]: 2026-01-17 00:20:55.472 [INFO][4143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.130/32] ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.522691 containerd[1461]: 2026-01-17 00:20:55.472 [INFO][4143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9f80d6b616 ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.522691 containerd[1461]: 2026-01-17 00:20:55.477 [INFO][4143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.522691 containerd[1461]: 2026-01-17 00:20:55.477 
[INFO][4143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"fee43243-8ebd-4cd2-afa5-ba57dc078efe", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a", Pod:"calico-apiserver-669cbdb5c4-xt5pt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9f80d6b616", MAC:"66:f3:7a:45:43:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:55.522691 containerd[1461]: 2026-01-17 00:20:55.515 [INFO][4143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-xt5pt" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:20:55.563556 containerd[1461]: time="2026-01-17T00:20:55.563014221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:55.563556 containerd[1461]: time="2026-01-17T00:20:55.563096830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:55.563556 containerd[1461]: time="2026-01-17T00:20:55.563108908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:55.563556 containerd[1461]: time="2026-01-17T00:20:55.563220746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:55.597775 systemd[1]: Started cri-containerd-182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a.scope - libcontainer container 182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a. 
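[annotation] kubelet surfaces the failed pulls as container waiting states, first ErrImagePull and then ImagePullBackOff, on the pod status; that is what the repeated "Error syncing pod, skipping" entries report and what `kubectl describe pod` would show. A minimal client-go sketch reading those waiting reasons for the calico-system namespace, assuming in-cluster credentials.

```go
// List waiting-state reasons (ErrImagePull / ImagePullBackOff) via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("calico-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				fmt.Printf("%s/%s: %s (%s)\n",
					p.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
			}
		}
	}
}
```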
Jan 17 00:20:55.609367 kubelet[2529]: E0117 00:20:55.609300 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7" Jan 17 00:20:55.663916 containerd[1461]: time="2026-01-17T00:20:55.663867140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-xt5pt,Uid:fee43243-8ebd-4cd2-afa5-ba57dc078efe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a\"" Jan 17 00:20:55.671890 containerd[1461]: time="2026-01-17T00:20:55.671848514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:20:55.986942 containerd[1461]: time="2026-01-17T00:20:55.986889574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:55.988516 containerd[1461]: time="2026-01-17T00:20:55.987905279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:20:55.988756 containerd[1461]: time="2026-01-17T00:20:55.988069695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:55.989399 kubelet[2529]: E0117 00:20:55.988985 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:55.989399 kubelet[2529]: E0117 00:20:55.989058 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:20:55.989927 kubelet[2529]: E0117 00:20:55.989846 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zds2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-669cbdb5c4-xt5pt_calico-apiserver(fee43243-8ebd-4cd2-afa5-ba57dc078efe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:55.992107 kubelet[2529]: E0117 00:20:55.992061 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:20:56.138789 containerd[1461]: time="2026-01-17T00:20:56.138734585Z" level=info msg="StopPodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\"" Jan 17 00:20:56.139719 containerd[1461]: time="2026-01-17T00:20:56.139322574Z" level=info msg="StopPodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\"" Jan 17 00:20:56.140269 containerd[1461]: time="2026-01-17T00:20:56.140242597Z" level=info msg="StopPodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\"" Jan 17 00:20:56.323921 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.243 [INFO][4235] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.244 [INFO][4235] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" iface="eth0" netns="/var/run/netns/cni-fc6f8448-2aae-978b-7a13-495c8ed06408" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.245 [INFO][4235] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" iface="eth0" netns="/var/run/netns/cni-fc6f8448-2aae-978b-7a13-495c8ed06408" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.248 [INFO][4235] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" iface="eth0" netns="/var/run/netns/cni-fc6f8448-2aae-978b-7a13-495c8ed06408" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.251 [INFO][4235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.251 [INFO][4235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.358 [INFO][4257] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.361 [INFO][4257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.361 [INFO][4257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.381 [WARNING][4257] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.381 [INFO][4257] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.384 [INFO][4257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:56.396205 containerd[1461]: 2026-01-17 00:20:56.390 [INFO][4235] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:20:56.402690 containerd[1461]: time="2026-01-17T00:20:56.397931188Z" level=info msg="TearDown network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" successfully" Jan 17 00:20:56.402690 containerd[1461]: time="2026-01-17T00:20:56.397965207Z" level=info msg="StopPodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" returns successfully" Jan 17 00:20:56.402690 containerd[1461]: time="2026-01-17T00:20:56.399827638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-m56rm,Uid:a4529381-2d40-4d70-a757-b0ee2c920e64,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:56.402431 systemd[1]: run-netns-cni\x2dfc6f8448\x2d2aae\x2d978b\x2d7a13\x2d495c8ed06408.mount: Deactivated successfully. Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.308 [INFO][4239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.308 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" iface="eth0" netns="/var/run/netns/cni-c1991b06-5b78-4b4d-03e5-38fbb94cc3d4" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.309 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" iface="eth0" netns="/var/run/netns/cni-c1991b06-5b78-4b4d-03e5-38fbb94cc3d4" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.309 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" iface="eth0" netns="/var/run/netns/cni-c1991b06-5b78-4b4d-03e5-38fbb94cc3d4" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.309 [INFO][4239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.309 [INFO][4239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.377 [INFO][4264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.378 [INFO][4264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.385 [INFO][4264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.394 [WARNING][4264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.394 [INFO][4264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.405 [INFO][4264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:56.415065 containerd[1461]: 2026-01-17 00:20:56.408 [INFO][4239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:20:56.419696 containerd[1461]: time="2026-01-17T00:20:56.416295992Z" level=info msg="TearDown network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" successfully" Jan 17 00:20:56.419696 containerd[1461]: time="2026-01-17T00:20:56.416328609Z" level=info msg="StopPodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" returns successfully" Jan 17 00:20:56.419696 containerd[1461]: time="2026-01-17T00:20:56.417424145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9877fd47-255j9,Uid:1b489003-62f2-46b7-a6af-3a3a669c193c,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:56.421310 systemd[1]: run-netns-cni\x2dc1991b06\x2d5b78\x2d4b4d\x2d03e5\x2d38fbb94cc3d4.mount: Deactivated successfully. Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.303 [INFO][4243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.306 [INFO][4243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" iface="eth0" netns="/var/run/netns/cni-e113d8f6-67b2-abf6-b732-5cf2250944a3" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.306 [INFO][4243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" iface="eth0" netns="/var/run/netns/cni-e113d8f6-67b2-abf6-b732-5cf2250944a3" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.308 [INFO][4243] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" iface="eth0" netns="/var/run/netns/cni-e113d8f6-67b2-abf6-b732-5cf2250944a3" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.308 [INFO][4243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.308 [INFO][4243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.388 [INFO][4266] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.389 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.405 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.443 [WARNING][4266] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.443 [INFO][4266] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.450 [INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:56.472595 containerd[1461]: 2026-01-17 00:20:56.455 [INFO][4243] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:20:56.476111 containerd[1461]: time="2026-01-17T00:20:56.475597128Z" level=info msg="TearDown network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" successfully" Jan 17 00:20:56.476111 containerd[1461]: time="2026-01-17T00:20:56.475820446Z" level=info msg="StopPodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" returns successfully" Jan 17 00:20:56.480397 containerd[1461]: time="2026-01-17T00:20:56.479903098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dcsb9,Uid:96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0,Namespace:calico-system,Attempt:1,}" Jan 17 00:20:56.622609 kubelet[2529]: E0117 00:20:56.622512 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:20:56.810884 systemd-networkd[1371]: calic31e565c037: Link UP Jan 17 00:20:56.813739 systemd-networkd[1371]: calic31e565c037: Gained carrier Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.601 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0 calico-kube-controllers-b9877fd47- calico-system 1b489003-62f2-46b7-a6af-3a3a669c193c 999 0 2026-01-17 00:20:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b9877fd47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 calico-kube-controllers-b9877fd47-255j9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic31e565c037 [] [] }} ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.602 [INFO][4288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.698 [INFO][4320] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" HandleID="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.699 [INFO][4320] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" HandleID="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"calico-kube-controllers-b9877fd47-255j9", "timestamp":"2026-01-17 00:20:56.698916346 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.699 [INFO][4320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.699 [INFO][4320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.700 [INFO][4320] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.728 [INFO][4320] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.745 [INFO][4320] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.763 [INFO][4320] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.769 [INFO][4320] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.774 [INFO][4320] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.774 [INFO][4320] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.778 [INFO][4320] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7 Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.785 [INFO][4320] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.795 [INFO][4320] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.131/26] block=192.168.60.128/26 handle="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.795 [INFO][4320] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.131/26] handle="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:56.835804 containerd[1461]: 
2026-01-17 00:20:56.795 [INFO][4320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:20:56.835804 containerd[1461]: 2026-01-17 00:20:56.795 [INFO][4320] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.131/26] IPv6=[] ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" HandleID="k8s-pod-network.5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.836945 containerd[1461]: 2026-01-17 00:20:56.801 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0", GenerateName:"calico-kube-controllers-b9877fd47-", Namespace:"calico-system", SelfLink:"", UID:"1b489003-62f2-46b7-a6af-3a3a669c193c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9877fd47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"calico-kube-controllers-b9877fd47-255j9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic31e565c037", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:56.836945 containerd[1461]: 2026-01-17 00:20:56.801 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.131/32] ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.836945 containerd[1461]: 2026-01-17 00:20:56.801 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic31e565c037 ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.836945 containerd[1461]: 2026-01-17 00:20:56.814 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" 
WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.836945 containerd[1461]: 2026-01-17 00:20:56.815 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0", GenerateName:"calico-kube-controllers-b9877fd47-", Namespace:"calico-system", SelfLink:"", UID:"1b489003-62f2-46b7-a6af-3a3a669c193c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9877fd47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7", Pod:"calico-kube-controllers-b9877fd47-255j9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic31e565c037", MAC:"b6:0d:93:6b:29:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:56.836945 containerd[1461]: 2026-01-17 00:20:56.831 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7" Namespace="calico-system" Pod="calico-kube-controllers-b9877fd47-255j9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:20:56.926681 containerd[1461]: time="2026-01-17T00:20:56.922759523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:56.926681 containerd[1461]: time="2026-01-17T00:20:56.922859451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:56.926681 containerd[1461]: time="2026-01-17T00:20:56.922876153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:56.926681 containerd[1461]: time="2026-01-17T00:20:56.922995204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:56.959482 systemd-networkd[1371]: cali50b5449aa60: Link UP Jan 17 00:20:56.961326 systemd-networkd[1371]: cali50b5449aa60: Gained carrier Jan 17 00:20:56.995804 systemd[1]: Started cri-containerd-5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7.scope - libcontainer container 5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7. Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.582 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0 goldmane-666569f655- calico-system a4529381-2d40-4d70-a757-b0ee2c920e64 997 0 2026-01-17 00:20:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 goldmane-666569f655-m56rm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali50b5449aa60 [] [] }} ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.582 [INFO][4280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.750 [INFO][4313] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" HandleID="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.751 [INFO][4313] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" HandleID="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"goldmane-666569f655-m56rm", "timestamp":"2026-01-17 00:20:56.750875548 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.752 [INFO][4313] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.796 [INFO][4313] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.796 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.829 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.851 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.865 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.868 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.877 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.877 [INFO][4313] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.883 [INFO][4313] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.899 [INFO][4313] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.924 [INFO][4313] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.132/26] block=192.168.60.128/26 handle="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.924 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.132/26] handle="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.924 [INFO][4313] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:57.009727 containerd[1461]: 2026-01-17 00:20:56.924 [INFO][4313] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.132/26] IPv6=[] ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" HandleID="k8s-pod-network.a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.010323 containerd[1461]: 2026-01-17 00:20:56.942 [INFO][4280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a4529381-2d40-4d70-a757-b0ee2c920e64", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"goldmane-666569f655-m56rm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50b5449aa60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:57.010323 containerd[1461]: 2026-01-17 00:20:56.942 [INFO][4280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.132/32] ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.010323 containerd[1461]: 2026-01-17 00:20:56.942 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50b5449aa60 ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.010323 containerd[1461]: 2026-01-17 00:20:56.964 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.010323 containerd[1461]: 2026-01-17 00:20:56.965 [INFO][4280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" 
Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a4529381-2d40-4d70-a757-b0ee2c920e64", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf", Pod:"goldmane-666569f655-m56rm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50b5449aa60", MAC:"02:8b:dd:d8:dd:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:57.010323 containerd[1461]: 2026-01-17 00:20:56.995 [INFO][4280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf" Namespace="calico-system" Pod="goldmane-666569f655-m56rm" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:20:57.097759 containerd[1461]: time="2026-01-17T00:20:57.086361371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:57.097759 containerd[1461]: time="2026-01-17T00:20:57.086584198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:57.097759 containerd[1461]: time="2026-01-17T00:20:57.086597778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:57.097759 containerd[1461]: time="2026-01-17T00:20:57.087338003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:57.101929 systemd-networkd[1371]: cali5eb1ed15ebd: Link UP Jan 17 00:20:57.104183 systemd-networkd[1371]: cali5eb1ed15ebd: Gained carrier Jan 17 00:20:57.125807 systemd[1]: Started cri-containerd-a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf.scope - libcontainer container a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf. 
Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.601 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0 csi-node-driver- calico-system 96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0 998 0 2026-01-17 00:20:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 csi-node-driver-dcsb9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5eb1ed15ebd [] [] }} ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.602 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.768 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" HandleID="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.770 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" HandleID="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000261980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"csi-node-driver-dcsb9", "timestamp":"2026-01-17 00:20:56.768179026 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.770 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.926 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.926 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.943 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.971 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:56.995 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.001 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.014 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.014 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.017 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640 Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.028 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.047 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.133/26] block=192.168.60.128/26 handle="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.047 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.133/26] handle="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.047 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:20:57.137764 containerd[1461]: 2026-01-17 00:20:57.048 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.133/26] IPv6=[] ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" HandleID="k8s-pod-network.15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.141345 containerd[1461]: 2026-01-17 00:20:57.059 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"csi-node-driver-dcsb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5eb1ed15ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:57.141345 containerd[1461]: 2026-01-17 00:20:57.059 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.133/32] ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.141345 containerd[1461]: 2026-01-17 00:20:57.060 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5eb1ed15ebd ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.141345 containerd[1461]: 2026-01-17 00:20:57.106 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.141345 containerd[1461]: 2026-01-17 00:20:57.109 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640", Pod:"csi-node-driver-dcsb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5eb1ed15ebd", MAC:"0e:3d:47:f1:2f:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:20:57.141345 containerd[1461]: 2026-01-17 00:20:57.129 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640" Namespace="calico-system" Pod="csi-node-driver-dcsb9" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:20:57.170277 containerd[1461]: time="2026-01-17T00:20:57.169944698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:20:57.170277 containerd[1461]: time="2026-01-17T00:20:57.170027436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:20:57.170277 containerd[1461]: time="2026-01-17T00:20:57.170043970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:57.170277 containerd[1461]: time="2026-01-17T00:20:57.170147265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:20:57.235827 systemd[1]: Started cri-containerd-15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640.scope - libcontainer container 15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640. 
Jan 17 00:20:57.263488 containerd[1461]: time="2026-01-17T00:20:57.263366509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9877fd47-255j9,Uid:1b489003-62f2-46b7-a6af-3a3a669c193c,Namespace:calico-system,Attempt:1,} returns sandbox id \"5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7\"" Jan 17 00:20:57.267480 containerd[1461]: time="2026-01-17T00:20:57.267423827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:20:57.340452 containerd[1461]: time="2026-01-17T00:20:57.340336945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dcsb9,Uid:96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640\"" Jan 17 00:20:57.351834 containerd[1461]: time="2026-01-17T00:20:57.351589679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-m56rm,Uid:a4529381-2d40-4d70-a757-b0ee2c920e64,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf\"" Jan 17 00:20:57.406487 systemd[1]: run-netns-cni\x2de113d8f6\x2d67b2\x2dabf6\x2db732\x2d5cf2250944a3.mount: Deactivated successfully. Jan 17 00:20:57.475758 systemd-networkd[1371]: calia9f80d6b616: Gained IPv6LL Jan 17 00:20:57.604327 containerd[1461]: time="2026-01-17T00:20:57.604054326Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:57.606012 containerd[1461]: time="2026-01-17T00:20:57.605723448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:20:57.606012 containerd[1461]: time="2026-01-17T00:20:57.605832184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:20:57.607340 kubelet[2529]: E0117 00:20:57.606244 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:20:57.607340 kubelet[2529]: E0117 00:20:57.606297 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:20:57.607340 kubelet[2529]: E0117 00:20:57.606560 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgjhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b9877fd47-255j9_calico-system(1b489003-62f2-46b7-a6af-3a3a669c193c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:57.607757 containerd[1461]: time="2026-01-17T00:20:57.607105385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:20:57.608203 kubelet[2529]: E0117 00:20:57.608035 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" 
podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:20:57.623763 kubelet[2529]: E0117 00:20:57.623300 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:20:57.631612 kubelet[2529]: E0117 00:20:57.629913 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:20:57.947934 containerd[1461]: time="2026-01-17T00:20:57.947710146Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:57.949244 containerd[1461]: time="2026-01-17T00:20:57.949104603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:20:57.949244 containerd[1461]: time="2026-01-17T00:20:57.949161791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:20:57.949829 kubelet[2529]: E0117 00:20:57.949600 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:20:57.949829 kubelet[2529]: E0117 00:20:57.949659 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:20:57.950131 kubelet[2529]: E0117 00:20:57.949922 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k797,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:57.950289 containerd[1461]: time="2026-01-17T00:20:57.950214051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:20:58.179969 systemd-networkd[1371]: calic31e565c037: Gained IPv6LL Jan 17 00:20:58.289427 containerd[1461]: time="2026-01-17T00:20:58.289257198Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:58.290929 containerd[1461]: time="2026-01-17T00:20:58.290747386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:20:58.290929 containerd[1461]: time="2026-01-17T00:20:58.290841355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:20:58.291141 kubelet[2529]: E0117 00:20:58.291070 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:20:58.291184 kubelet[2529]: E0117 00:20:58.291137 2529 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:20:58.293264 containerd[1461]: time="2026-01-17T00:20:58.291435952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:20:58.293394 kubelet[2529]: E0117 00:20:58.292851 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxlpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-m56rm_calico-system(a4529381-2d40-4d70-a757-b0ee2c920e64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" 
Jan 17 00:20:58.295297 kubelet[2529]: E0117 00:20:58.294876 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64" Jan 17 00:20:58.584794 containerd[1461]: time="2026-01-17T00:20:58.584146288Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:20:58.587290 containerd[1461]: time="2026-01-17T00:20:58.587220009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:20:58.587454 containerd[1461]: time="2026-01-17T00:20:58.587362654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:20:58.588509 kubelet[2529]: E0117 00:20:58.588115 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:20:58.588509 kubelet[2529]: E0117 00:20:58.588194 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:20:58.588509 kubelet[2529]: E0117 00:20:58.588392 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k797,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:20:58.590446 kubelet[2529]: E0117 00:20:58.590379 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:58.631835 systemd-networkd[1371]: cali50b5449aa60: Gained IPv6LL Jan 17 00:20:58.636947 kubelet[2529]: E0117 00:20:58.636610 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64" Jan 17 00:20:58.638315 kubelet[2529]: E0117 00:20:58.638244 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:20:58.639623 kubelet[2529]: E0117 00:20:58.639577 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:20:59.011918 systemd-networkd[1371]: cali5eb1ed15ebd: Gained IPv6LL Jan 17 00:21:00.138575 containerd[1461]: time="2026-01-17T00:21:00.138070637Z" level=info msg="StopPodSandbox for \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\"" Jan 17 00:21:00.140465 containerd[1461]: time="2026-01-17T00:21:00.139677673Z" level=info msg="StopPodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\"" Jan 17 00:21:00.142101 containerd[1461]: time="2026-01-17T00:21:00.141240894Z" level=info msg="StopPodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\"" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.253 [INFO][4525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.253 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" iface="eth0" netns="/var/run/netns/cni-48c74cca-3163-5f71-40a5-50c7adbf71db" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.254 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" iface="eth0" netns="/var/run/netns/cni-48c74cca-3163-5f71-40a5-50c7adbf71db" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.254 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" iface="eth0" netns="/var/run/netns/cni-48c74cca-3163-5f71-40a5-50c7adbf71db" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.255 [INFO][4525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.256 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.306 [INFO][4544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.308 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.308 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.325 [WARNING][4544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.325 [INFO][4544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.329 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.340557 containerd[1461]: 2026-01-17 00:21:00.333 [INFO][4525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:00.342183 containerd[1461]: time="2026-01-17T00:21:00.342061028Z" level=info msg="TearDown network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" successfully" Jan 17 00:21:00.342183 containerd[1461]: time="2026-01-17T00:21:00.342094770Z" level=info msg="StopPodSandbox for \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" returns successfully" Jan 17 00:21:00.344374 containerd[1461]: time="2026-01-17T00:21:00.344339585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-j86b8,Uid:8c0578bf-2fb3-4218-b665-10ff5fcbea9f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:21:00.352879 systemd[1]: run-netns-cni\x2d48c74cca\x2d3163\x2d5f71\x2d40a5\x2d50c7adbf71db.mount: Deactivated successfully. Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.287 [INFO][4526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.293 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" iface="eth0" netns="/var/run/netns/cni-cb92b7f2-09c7-a89f-0757-a977715286d4" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.294 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" iface="eth0" netns="/var/run/netns/cni-cb92b7f2-09c7-a89f-0757-a977715286d4" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.294 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" iface="eth0" netns="/var/run/netns/cni-cb92b7f2-09c7-a89f-0757-a977715286d4" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.294 [INFO][4526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.295 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.348 [INFO][4550] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.350 [INFO][4550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.350 [INFO][4550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.380 [WARNING][4550] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.380 [INFO][4550] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.387 [INFO][4550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.404875 containerd[1461]: 2026-01-17 00:21:00.396 [INFO][4526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:00.410225 systemd[1]: run-netns-cni\x2dcb92b7f2\x2d09c7\x2da89f\x2d0757\x2da977715286d4.mount: Deactivated successfully. 
Jan 17 00:21:00.412307 containerd[1461]: time="2026-01-17T00:21:00.412149812Z" level=info msg="TearDown network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" successfully" Jan 17 00:21:00.412307 containerd[1461]: time="2026-01-17T00:21:00.412188356Z" level=info msg="StopPodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" returns successfully" Jan 17 00:21:00.413564 kubelet[2529]: E0117 00:21:00.412902 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:00.415725 containerd[1461]: time="2026-01-17T00:21:00.414971476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kg6z8,Uid:16e8fb15-593a-4be8-833b-05df43f1e4e7,Namespace:kube-system,Attempt:1,}" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.293 [INFO][4524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.293 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" iface="eth0" netns="/var/run/netns/cni-83b74a2b-c153-4e96-c393-a9b7b546c54c" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.294 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" iface="eth0" netns="/var/run/netns/cni-83b74a2b-c153-4e96-c393-a9b7b546c54c" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.294 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" iface="eth0" netns="/var/run/netns/cni-83b74a2b-c153-4e96-c393-a9b7b546c54c" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.295 [INFO][4524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.295 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.392 [INFO][4552] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.393 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.393 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.420 [WARNING][4552] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.420 [INFO][4552] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.424 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.434671 containerd[1461]: 2026-01-17 00:21:00.429 [INFO][4524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:00.437181 containerd[1461]: time="2026-01-17T00:21:00.434816899Z" level=info msg="TearDown network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" successfully" Jan 17 00:21:00.437181 containerd[1461]: time="2026-01-17T00:21:00.434847008Z" level=info msg="StopPodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" returns successfully" Jan 17 00:21:00.437238 kubelet[2529]: E0117 00:21:00.435200 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:00.444520 containerd[1461]: time="2026-01-17T00:21:00.441984904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2jwfn,Uid:c5483e8b-299a-4a15-8ed6-7af74d3f03f3,Namespace:kube-system,Attempt:1,}" Jan 17 00:21:00.703819 systemd-networkd[1371]: cali18d8a3d70c6: Link UP Jan 17 00:21:00.708476 systemd-networkd[1371]: cali18d8a3d70c6: Gained carrier Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.540 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0 coredns-674b8bbfcf- kube-system 16e8fb15-593a-4be8-833b-05df43f1e4e7 1061 0 2026-01-17 00:20:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 coredns-674b8bbfcf-kg6z8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18d8a3d70c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.542 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.618 [INFO][4602] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" HandleID="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.622 [INFO][4602] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" HandleID="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"coredns-674b8bbfcf-kg6z8", "timestamp":"2026-01-17 00:21:00.618461621 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.622 [INFO][4602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.622 [INFO][4602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.622 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.637 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.650 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.657 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.660 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.663 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.663 [INFO][4602] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.665 [INFO][4602] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.673 [INFO][4602] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.683 [INFO][4602] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.134/26] block=192.168.60.128/26 handle="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" 
host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.683 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.134/26] handle="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.684 [INFO][4602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:00.745766 containerd[1461]: 2026-01-17 00:21:00.684 [INFO][4602] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.134/26] IPv6=[] ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" HandleID="k8s-pod-network.57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.747698 containerd[1461]: 2026-01-17 00:21:00.691 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"16e8fb15-593a-4be8-833b-05df43f1e4e7", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"coredns-674b8bbfcf-kg6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18d8a3d70c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.747698 containerd[1461]: 2026-01-17 00:21:00.692 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.134/32] ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.747698 containerd[1461]: 2026-01-17 00:21:00.694 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18d8a3d70c6 
ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.747698 containerd[1461]: 2026-01-17 00:21:00.706 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.747698 containerd[1461]: 2026-01-17 00:21:00.709 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"16e8fb15-593a-4be8-833b-05df43f1e4e7", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b", Pod:"coredns-674b8bbfcf-kg6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18d8a3d70c6", MAC:"4a:2d:ab:76:b8:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.747698 containerd[1461]: 2026-01-17 00:21:00.730 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kg6z8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:00.805648 containerd[1461]: time="2026-01-17T00:21:00.805467623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:00.809973 containerd[1461]: time="2026-01-17T00:21:00.805591567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:00.815366 containerd[1461]: time="2026-01-17T00:21:00.815177015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:00.815849 containerd[1461]: time="2026-01-17T00:21:00.815361231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:00.825119 systemd-networkd[1371]: calib926a2f663f: Link UP Jan 17 00:21:00.827323 systemd-networkd[1371]: calib926a2f663f: Gained carrier Jan 17 00:21:00.877779 systemd[1]: Started cri-containerd-57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b.scope - libcontainer container 57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b. Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.551 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0 calico-apiserver-669cbdb5c4- calico-apiserver 8c0578bf-2fb3-4218-b665-10ff5fcbea9f 1060 0 2026-01-17 00:20:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:669cbdb5c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 calico-apiserver-669cbdb5c4-j86b8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib926a2f663f [] [] }} ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.553 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.632 [INFO][4608] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" HandleID="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.632 [INFO][4608] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" HandleID="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"calico-apiserver-669cbdb5c4-j86b8", "timestamp":"2026-01-17 00:21:00.632476372 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.632 [INFO][4608] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.684 [INFO][4608] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.685 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.743 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.757 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.771 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.778 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.785 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.785 [INFO][4608] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.789 [INFO][4608] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.797 [INFO][4608] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.807 [INFO][4608] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.135/26] block=192.168.60.128/26 handle="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.808 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.135/26] handle="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.808 [INFO][4608] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:21:00.883701 containerd[1461]: 2026-01-17 00:21:00.808 [INFO][4608] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.135/26] IPv6=[] ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" HandleID="k8s-pod-network.08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.884835 containerd[1461]: 2026-01-17 00:21:00.818 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c0578bf-2fb3-4218-b665-10ff5fcbea9f", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"calico-apiserver-669cbdb5c4-j86b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib926a2f663f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.884835 containerd[1461]: 2026-01-17 00:21:00.818 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.135/32] ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.884835 containerd[1461]: 2026-01-17 00:21:00.819 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib926a2f663f ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.884835 containerd[1461]: 2026-01-17 00:21:00.827 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.884835 containerd[1461]: 2026-01-17 00:21:00.831 
[INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c0578bf-2fb3-4218-b665-10ff5fcbea9f", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b", Pod:"calico-apiserver-669cbdb5c4-j86b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib926a2f663f", MAC:"2a:f2:f3:92:e7:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:00.884835 containerd[1461]: 2026-01-17 00:21:00.860 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b" Namespace="calico-apiserver" Pod="calico-apiserver-669cbdb5c4-j86b8" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:00.968428 containerd[1461]: time="2026-01-17T00:21:00.967354029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:00.968711 containerd[1461]: time="2026-01-17T00:21:00.968348865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:00.968711 containerd[1461]: time="2026-01-17T00:21:00.968380306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:00.974583 containerd[1461]: time="2026-01-17T00:21:00.972108485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:00.984955 systemd-networkd[1371]: calie11697c5d15: Link UP Jan 17 00:21:00.986794 systemd-networkd[1371]: calie11697c5d15: Gained carrier Jan 17 00:21:01.000420 containerd[1461]: time="2026-01-17T00:21:00.999701087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kg6z8,Uid:16e8fb15-593a-4be8-833b-05df43f1e4e7,Namespace:kube-system,Attempt:1,} returns sandbox id \"57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b\"" Jan 17 00:21:01.003032 kubelet[2529]: E0117 00:21:01.002959 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:01.035653 containerd[1461]: time="2026-01-17T00:21:01.033978803Z" level=info msg="CreateContainer within sandbox \"57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:21:01.089835 systemd[1]: Started cri-containerd-08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b.scope - libcontainer container 08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b. Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.564 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0 coredns-674b8bbfcf- kube-system c5483e8b-299a-4a15-8ed6-7af74d3f03f3 1062 0 2026-01-17 00:20:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8cc98427e3 coredns-674b8bbfcf-2jwfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie11697c5d15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.565 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.633 [INFO][4613] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" HandleID="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.634 [INFO][4613] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" HandleID="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac1e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8cc98427e3", "pod":"coredns-674b8bbfcf-2jwfn", "timestamp":"2026-01-17 
00:21:00.633861966 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8cc98427e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.634 [INFO][4613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.809 [INFO][4613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.813 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8cc98427e3' Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.840 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.873 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.900 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.911 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.919 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.919 [INFO][4613] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.925 [INFO][4613] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3 Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.937 [INFO][4613] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.959 [INFO][4613] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.136/26] block=192.168.60.128/26 handle="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.959 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.136/26] handle="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" host="ci-4081.3.6-n-8cc98427e3" Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.959 [INFO][4613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
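
Interleaved with the sandbox setup, kubelet keeps logging "Nameserver limits exceeded". The cause is visible in the message itself: the applied line 67.207.67.2 67.207.67.3 67.207.67.2 lists the same resolver twice, and classic libc resolvers honor at most three nameserver entries, so anything past the third is silently dropped — kubelet warns whenever a resolv.conf would exceed that limit. A minimal Go sketch of the rule (an illustration, not kubelet's implementation):

    // resolv_check.go — the check behind the "Nameserver limits exceeded"
    // warnings: only the first three nameserver lines take effect.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > 3 {
            fmt.Printf("limit exceeded: %d nameservers, only the first 3 apply: %v\n",
                len(servers), servers[:3])
        }
    }

On this droplet, deduplicating the DigitalOcean resolvers in the node's resolv.conf would likely make the warning stop repeating.
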
Jan 17 00:21:01.096620 containerd[1461]: 2026-01-17 00:21:00.959 [INFO][4613] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.136/26] IPv6=[] ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" HandleID="k8s-pod-network.b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.098990 containerd[1461]: 2026-01-17 00:21:00.976 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5483e8b-299a-4a15-8ed6-7af74d3f03f3", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"", Pod:"coredns-674b8bbfcf-2jwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11697c5d15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.098990 containerd[1461]: 2026-01-17 00:21:00.977 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.136/32] ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.098990 containerd[1461]: 2026-01-17 00:21:00.977 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie11697c5d15 ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.098990 containerd[1461]: 2026-01-17 00:21:00.991 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.098990 containerd[1461]: 2026-01-17 00:21:00.995 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5483e8b-299a-4a15-8ed6-7af74d3f03f3", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3", Pod:"coredns-674b8bbfcf-2jwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11697c5d15", MAC:"2a:53:5b:5a:83:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:01.098990 containerd[1461]: 2026-01-17 00:21:01.068 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-2jwfn" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:01.137762 containerd[1461]: time="2026-01-17T00:21:01.137525338Z" level=info msg="CreateContainer within sandbox \"57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88b426692a07736f02ef2b8355e081858c8312a16d76f61ae8bc5081dd2c5f53\"" Jan 17 00:21:01.141016 containerd[1461]: time="2026-01-17T00:21:01.140939781Z" level=info msg="StartContainer for \"88b426692a07736f02ef2b8355e081858c8312a16d76f61ae8bc5081dd2c5f53\"" Jan 17 00:21:01.173813 containerd[1461]: time="2026-01-17T00:21:01.171616223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:01.173813 containerd[1461]: time="2026-01-17T00:21:01.173781945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:01.175129 containerd[1461]: time="2026-01-17T00:21:01.174037067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:01.175129 containerd[1461]: time="2026-01-17T00:21:01.174300291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:01.219761 systemd[1]: Started cri-containerd-88b426692a07736f02ef2b8355e081858c8312a16d76f61ae8bc5081dd2c5f53.scope - libcontainer container 88b426692a07736f02ef2b8355e081858c8312a16d76f61ae8bc5081dd2c5f53. Jan 17 00:21:01.224045 systemd[1]: Started cri-containerd-b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3.scope - libcontainer container b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3. Jan 17 00:21:01.349880 containerd[1461]: time="2026-01-17T00:21:01.349710467Z" level=info msg="StartContainer for \"88b426692a07736f02ef2b8355e081858c8312a16d76f61ae8bc5081dd2c5f53\" returns successfully" Jan 17 00:21:01.358400 systemd[1]: run-netns-cni\x2d83b74a2b\x2dc153\x2d4e96\x2dc393\x2da9b7b546c54c.mount: Deactivated successfully. Jan 17 00:21:01.383670 containerd[1461]: time="2026-01-17T00:21:01.383273073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2jwfn,Uid:c5483e8b-299a-4a15-8ed6-7af74d3f03f3,Namespace:kube-system,Attempt:1,} returns sandbox id \"b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3\"" Jan 17 00:21:01.387577 kubelet[2529]: E0117 00:21:01.387516 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:01.396870 containerd[1461]: time="2026-01-17T00:21:01.395909348Z" level=info msg="CreateContainer within sandbox \"b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:21:01.437387 containerd[1461]: time="2026-01-17T00:21:01.437319076Z" level=info msg="CreateContainer within sandbox \"b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"123ec22e53ac06f4c18033ad469e4007818caf0346083a697a75d0a7a86c68fb\"" Jan 17 00:21:01.442013 containerd[1461]: time="2026-01-17T00:21:01.438423458Z" level=info msg="StartContainer for \"123ec22e53ac06f4c18033ad469e4007818caf0346083a697a75d0a7a86c68fb\"" Jan 17 00:21:01.439211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831943397.mount: Deactivated successfully. Jan 17 00:21:01.518883 systemd[1]: Started cri-containerd-123ec22e53ac06f4c18033ad469e4007818caf0346083a697a75d0a7a86c68fb.scope - libcontainer container 123ec22e53ac06f4c18033ad469e4007818caf0346083a697a75d0a7a86c68fb. 
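
Each "loading plugin io.containerd.…" group above is a containerd runc shim (v2) starting for a new container, after which systemd tracks the container in a transient cri-containerd-<container-id>.scope unit, as the "Started cri-containerd-….scope" lines show. The earlier mount-unit names like run-netns-cni\x2d48c74cca….mount come from systemd's path escaping: "/" separators become "-", so a literal "-" in the path must itself be escaped as \x2d. A minimal Go sketch of just those two rules (systemd-escape(1) handles more cases, such as leading dots and non-ASCII bytes):

    // unit_escape.go — the systemd path escaping visible in unit names
    // like run-netns-cni\x2d48c74cca….mount. Partial sketch: only the
    // "-" and "/" rules are implemented here.
    package main

    import (
        "fmt"
        "strings"
    )

    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        // Escape literal "-" first, then turn path separators into "-".
        p = strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(p, "/", "-")
    }

    func main() {
        // Reconstructs the netns mount unit name seen earlier in the log.
        fmt.Println(escapePath("/run/netns/cni-48c74cca-3163-5f71-40a5-50c7adbf71db") + ".mount")
    }

The output matches the unit name journald logged when the cni-48c74cca… namespace was unmounted.
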
Jan 17 00:21:01.601787 containerd[1461]: time="2026-01-17T00:21:01.601319235Z" level=info msg="StartContainer for \"123ec22e53ac06f4c18033ad469e4007818caf0346083a697a75d0a7a86c68fb\" returns successfully" Jan 17 00:21:01.650651 kubelet[2529]: E0117 00:21:01.650567 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:01.657645 kubelet[2529]: E0117 00:21:01.656110 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:01.753601 containerd[1461]: time="2026-01-17T00:21:01.753458234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669cbdb5c4-j86b8,Uid:8c0578bf-2fb3-4218-b665-10ff5fcbea9f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b\"" Jan 17 00:21:01.763303 containerd[1461]: time="2026-01-17T00:21:01.762709995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:01.781583 kubelet[2529]: I0117 00:21:01.781357 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2jwfn" podStartSLOduration=50.781324642 podStartE2EDuration="50.781324642s" podCreationTimestamp="2026-01-17 00:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:01.730089627 +0000 UTC m=+56.790361936" watchObservedRunningTime="2026-01-17 00:21:01.781324642 +0000 UTC m=+56.841596929" Jan 17 00:21:01.781979 kubelet[2529]: I0117 00:21:01.781926 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kg6z8" podStartSLOduration=50.781909992 podStartE2EDuration="50.781909992s" podCreationTimestamp="2026-01-17 00:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:01.779067211 +0000 UTC m=+56.839339493" watchObservedRunningTime="2026-01-17 00:21:01.781909992 +0000 UTC m=+56.842182289" Jan 17 00:21:01.892202 systemd-networkd[1371]: calib926a2f663f: Gained IPv6LL Jan 17 00:21:01.956086 systemd-networkd[1371]: cali18d8a3d70c6: Gained IPv6LL Jan 17 00:21:02.095801 containerd[1461]: time="2026-01-17T00:21:02.095638518Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:02.097036 containerd[1461]: time="2026-01-17T00:21:02.096977989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:02.097246 containerd[1461]: time="2026-01-17T00:21:02.097006100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:02.097275 kubelet[2529]: E0117 00:21:02.097231 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:02.097318 kubelet[2529]: E0117 00:21:02.097285 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:02.097797 kubelet[2529]: E0117 00:21:02.097433 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6ptw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-669cbdb5c4-j86b8_calico-apiserver(8c0578bf-2fb3-4218-b665-10ff5fcbea9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:02.099641 kubelet[2529]: E0117 00:21:02.099576 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" 
podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:21:02.404775 systemd-networkd[1371]: calie11697c5d15: Gained IPv6LL Jan 17 00:21:02.661124 kubelet[2529]: E0117 00:21:02.660615 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:02.663680 kubelet[2529]: E0117 00:21:02.663518 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:02.665436 kubelet[2529]: E0117 00:21:02.665380 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:21:03.664841 kubelet[2529]: E0117 00:21:03.664484 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:21:03.666798 kubelet[2529]: E0117 00:21:03.666710 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:04.667068 kubelet[2529]: E0117 00:21:04.666972 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:05.131634 containerd[1461]: time="2026-01-17T00:21:05.131387260Z" level=info msg="StopPodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\"" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.206 [WARNING][4873] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.206 [INFO][4873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.206 [INFO][4873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" iface="eth0" netns="" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.206 [INFO][4873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.206 [INFO][4873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.256 [INFO][4883] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.256 [INFO][4883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.256 [INFO][4883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.267 [WARNING][4883] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.267 [INFO][4883] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.269 [INFO][4883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.276514 containerd[1461]: 2026-01-17 00:21:05.272 [INFO][4873] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.276514 containerd[1461]: time="2026-01-17T00:21:05.276327934Z" level=info msg="TearDown network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" successfully" Jan 17 00:21:05.276514 containerd[1461]: time="2026-01-17T00:21:05.276367201Z" level=info msg="StopPodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" returns successfully" Jan 17 00:21:05.279619 containerd[1461]: time="2026-01-17T00:21:05.277278022Z" level=info msg="RemovePodSandbox for \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\"" Jan 17 00:21:05.281182 containerd[1461]: time="2026-01-17T00:21:05.281124235Z" level=info msg="Forcibly stopping sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\"" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.342 [WARNING][4898] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" WorkloadEndpoint="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.343 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.343 [INFO][4898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" iface="eth0" netns="" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.343 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.343 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.372 [INFO][4905] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.372 [INFO][4905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.372 [INFO][4905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.382 [WARNING][4905] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.382 [INFO][4905] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" HandleID="k8s-pod-network.838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Workload="ci--4081.3.6--n--8cc98427e3-k8s-whisker--6497847d59--vfrxp-eth0" Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.384 [INFO][4905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.389667 containerd[1461]: 2026-01-17 00:21:05.386 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac" Jan 17 00:21:05.391865 containerd[1461]: time="2026-01-17T00:21:05.391322036Z" level=info msg="TearDown network for sandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" successfully" Jan 17 00:21:05.401643 containerd[1461]: time="2026-01-17T00:21:05.401492843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:05.401865 containerd[1461]: time="2026-01-17T00:21:05.401675977Z" level=info msg="RemovePodSandbox \"838a260361119b936c4c3d02f67478bead9836787e5be40d80110848c4e9b5ac\" returns successfully" Jan 17 00:21:05.402588 containerd[1461]: time="2026-01-17T00:21:05.402512677Z" level=info msg="StopPodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\"" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.454 [WARNING][4919] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640", Pod:"csi-node-driver-dcsb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5eb1ed15ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.455 [INFO][4919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.455 [INFO][4919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" iface="eth0" netns="" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.455 [INFO][4919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.455 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.495 [INFO][4927] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.495 [INFO][4927] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.495 [INFO][4927] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.506 [WARNING][4927] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.506 [INFO][4927] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.508 [INFO][4927] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.515087 containerd[1461]: 2026-01-17 00:21:05.511 [INFO][4919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.517977 containerd[1461]: time="2026-01-17T00:21:05.515138676Z" level=info msg="TearDown network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" successfully" Jan 17 00:21:05.517977 containerd[1461]: time="2026-01-17T00:21:05.515180284Z" level=info msg="StopPodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" returns successfully" Jan 17 00:21:05.517977 containerd[1461]: time="2026-01-17T00:21:05.515734786Z" level=info msg="RemovePodSandbox for \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\"" Jan 17 00:21:05.517977 containerd[1461]: time="2026-01-17T00:21:05.515764804Z" level=info msg="Forcibly stopping sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\"" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.590 [WARNING][4941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"15dee02e237760eb1fe41c111ddbb0e74eeb26e077aec87648e60105787f0640", Pod:"csi-node-driver-dcsb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5eb1ed15ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.590 [INFO][4941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.590 [INFO][4941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" iface="eth0" netns="" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.590 [INFO][4941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.591 [INFO][4941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.617 [INFO][4951] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.617 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.617 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.626 [WARNING][4951] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.626 [INFO][4951] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" HandleID="k8s-pod-network.87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Workload="ci--4081.3.6--n--8cc98427e3-k8s-csi--node--driver--dcsb9-eth0" Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.629 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.633474 containerd[1461]: 2026-01-17 00:21:05.630 [INFO][4941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745" Jan 17 00:21:05.635033 containerd[1461]: time="2026-01-17T00:21:05.633634765Z" level=info msg="TearDown network for sandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" successfully" Jan 17 00:21:05.637510 containerd[1461]: time="2026-01-17T00:21:05.637416752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:05.637510 containerd[1461]: time="2026-01-17T00:21:05.637488978Z" level=info msg="RemovePodSandbox \"87a14bc904253110840c44a9cbfb2d2c7b945fa5100db2395842b3b79fa3f745\" returns successfully" Jan 17 00:21:05.638109 containerd[1461]: time="2026-01-17T00:21:05.638076961Z" level=info msg="StopPodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\"" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.686 [WARNING][4965] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a4529381-2d40-4d70-a757-b0ee2c920e64", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf", Pod:"goldmane-666569f655-m56rm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50b5449aa60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.686 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.686 [INFO][4965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" iface="eth0" netns="" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.686 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.686 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.715 [INFO][4972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.716 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.716 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.725 [WARNING][4972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.725 [INFO][4972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.727 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.732755 containerd[1461]: 2026-01-17 00:21:05.730 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.734274 containerd[1461]: time="2026-01-17T00:21:05.732764083Z" level=info msg="TearDown network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" successfully" Jan 17 00:21:05.734274 containerd[1461]: time="2026-01-17T00:21:05.732819757Z" level=info msg="StopPodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" returns successfully" Jan 17 00:21:05.736120 containerd[1461]: time="2026-01-17T00:21:05.736071340Z" level=info msg="RemovePodSandbox for \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\"" Jan 17 00:21:05.736218 containerd[1461]: time="2026-01-17T00:21:05.736132594Z" level=info msg="Forcibly stopping sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\"" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.791 [WARNING][4986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a4529381-2d40-4d70-a757-b0ee2c920e64", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"a6ad293a84963c23f531545421abd9c30eef1b59ddbfff375cbb5401a249fbbf", Pod:"goldmane-666569f655-m56rm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50b5449aa60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.792 [INFO][4986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.792 [INFO][4986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" iface="eth0" netns="" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.792 [INFO][4986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.792 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.819 [INFO][4993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.819 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.819 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.832 [WARNING][4993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.832 [INFO][4993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" HandleID="k8s-pod-network.508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Workload="ci--4081.3.6--n--8cc98427e3-k8s-goldmane--666569f655--m56rm-eth0" Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.835 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.839575 containerd[1461]: 2026-01-17 00:21:05.837 [INFO][4986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507" Jan 17 00:21:05.841759 containerd[1461]: time="2026-01-17T00:21:05.840353545Z" level=info msg="TearDown network for sandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" successfully" Jan 17 00:21:05.851236 containerd[1461]: time="2026-01-17T00:21:05.851171170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:05.851437 containerd[1461]: time="2026-01-17T00:21:05.851268586Z" level=info msg="RemovePodSandbox \"508319c222abee0c5b89c340610184580e04a9ff4363de9efe822c84e09b5507\" returns successfully" Jan 17 00:21:05.852564 containerd[1461]: time="2026-01-17T00:21:05.852032124Z" level=info msg="StopPodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\"" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.896 [WARNING][5007] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"16e8fb15-593a-4be8-833b-05df43f1e4e7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b", Pod:"coredns-674b8bbfcf-kg6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18d8a3d70c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.896 [INFO][5007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.896 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" iface="eth0" netns="" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.896 [INFO][5007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.896 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.934 [INFO][5014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.934 [INFO][5014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.934 [INFO][5014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.945 [WARNING][5014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.945 [INFO][5014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.948 [INFO][5014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:05.953201 containerd[1461]: 2026-01-17 00:21:05.950 [INFO][5007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:05.954628 containerd[1461]: time="2026-01-17T00:21:05.953496554Z" level=info msg="TearDown network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" successfully" Jan 17 00:21:05.954628 containerd[1461]: time="2026-01-17T00:21:05.953714613Z" level=info msg="StopPodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" returns successfully" Jan 17 00:21:05.955357 containerd[1461]: time="2026-01-17T00:21:05.955035357Z" level=info msg="RemovePodSandbox for \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\"" Jan 17 00:21:05.955357 containerd[1461]: time="2026-01-17T00:21:05.955080289Z" level=info msg="Forcibly stopping sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\"" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.012 [WARNING][5028] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"16e8fb15-593a-4be8-833b-05df43f1e4e7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"57770fd02966dad6893867846ae59026d306315b9c243e88c9faae3ad214d75b", Pod:"coredns-674b8bbfcf-kg6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18d8a3d70c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.015 [INFO][5028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.015 [INFO][5028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" iface="eth0" netns="" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.015 [INFO][5028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.015 [INFO][5028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.045 [INFO][5036] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.045 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.045 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.057 [WARNING][5036] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.057 [INFO][5036] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" HandleID="k8s-pod-network.4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--kg6z8-eth0" Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.060 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.066979 containerd[1461]: 2026-01-17 00:21:06.062 [INFO][5028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221" Jan 17 00:21:06.066979 containerd[1461]: time="2026-01-17T00:21:06.065652708Z" level=info msg="TearDown network for sandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" successfully" Jan 17 00:21:06.074806 containerd[1461]: time="2026-01-17T00:21:06.074652604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:06.074806 containerd[1461]: time="2026-01-17T00:21:06.074775341Z" level=info msg="RemovePodSandbox \"4972674ea7086c5f3a0512271193226956bd1cd7f22b559feafd9ec1036b1221\" returns successfully" Jan 17 00:21:06.076369 containerd[1461]: time="2026-01-17T00:21:06.075880472Z" level=info msg="StopPodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\"" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.136 [WARNING][5050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"fee43243-8ebd-4cd2-afa5-ba57dc078efe", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a", Pod:"calico-apiserver-669cbdb5c4-xt5pt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9f80d6b616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.137 [INFO][5050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.137 [INFO][5050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" iface="eth0" netns="" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.137 [INFO][5050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.137 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.166 [INFO][5057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.166 [INFO][5057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.166 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.174 [WARNING][5057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.174 [INFO][5057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.177 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.182583 containerd[1461]: 2026-01-17 00:21:06.180 [INFO][5050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.185107 containerd[1461]: time="2026-01-17T00:21:06.182643134Z" level=info msg="TearDown network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" successfully" Jan 17 00:21:06.185107 containerd[1461]: time="2026-01-17T00:21:06.182680900Z" level=info msg="StopPodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" returns successfully" Jan 17 00:21:06.185107 containerd[1461]: time="2026-01-17T00:21:06.183323860Z" level=info msg="RemovePodSandbox for \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\"" Jan 17 00:21:06.185107 containerd[1461]: time="2026-01-17T00:21:06.183371209Z" level=info msg="Forcibly stopping sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\"" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.236 [WARNING][5071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"fee43243-8ebd-4cd2-afa5-ba57dc078efe", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"182a9d80a4722bb8a4377249036931f5ca4d73ac18028a907ad301d8e126869a", Pod:"calico-apiserver-669cbdb5c4-xt5pt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9f80d6b616", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.236 [INFO][5071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.236 [INFO][5071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" iface="eth0" netns="" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.236 [INFO][5071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.236 [INFO][5071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.268 [INFO][5079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.268 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.268 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.278 [WARNING][5079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.278 [INFO][5079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" HandleID="k8s-pod-network.3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--xt5pt-eth0" Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.280 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.286694 containerd[1461]: 2026-01-17 00:21:06.283 [INFO][5071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5" Jan 17 00:21:06.288586 containerd[1461]: time="2026-01-17T00:21:06.287422280Z" level=info msg="TearDown network for sandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" successfully" Jan 17 00:21:06.292126 containerd[1461]: time="2026-01-17T00:21:06.292074872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:06.292414 containerd[1461]: time="2026-01-17T00:21:06.292371921Z" level=info msg="RemovePodSandbox \"3ffc5e8506d7df4b7bf7be1a5ecceee233557676031fc04e84d88f72bc18b9a5\" returns successfully" Jan 17 00:21:06.293525 containerd[1461]: time="2026-01-17T00:21:06.293421021Z" level=info msg="StopPodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\"" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.360 [WARNING][5093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0", GenerateName:"calico-kube-controllers-b9877fd47-", Namespace:"calico-system", SelfLink:"", UID:"1b489003-62f2-46b7-a6af-3a3a669c193c", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9877fd47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7", Pod:"calico-kube-controllers-b9877fd47-255j9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic31e565c037", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.361 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.361 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" iface="eth0" netns="" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.361 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.361 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.398 [INFO][5100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.399 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.399 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.409 [WARNING][5100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.409 [INFO][5100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.411 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.416744 containerd[1461]: 2026-01-17 00:21:06.414 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.418754 containerd[1461]: time="2026-01-17T00:21:06.416803249Z" level=info msg="TearDown network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" successfully" Jan 17 00:21:06.418754 containerd[1461]: time="2026-01-17T00:21:06.416839398Z" level=info msg="StopPodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" returns successfully" Jan 17 00:21:06.418754 containerd[1461]: time="2026-01-17T00:21:06.417814994Z" level=info msg="RemovePodSandbox for \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\"" Jan 17 00:21:06.418754 containerd[1461]: time="2026-01-17T00:21:06.417859664Z" level=info msg="Forcibly stopping sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\"" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.472 [WARNING][5114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0", GenerateName:"calico-kube-controllers-b9877fd47-", Namespace:"calico-system", SelfLink:"", UID:"1b489003-62f2-46b7-a6af-3a3a669c193c", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9877fd47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"5ba847bda35e000f761ef1d0536abf15a71752c10544e67c9bb81c9bea14c6f7", Pod:"calico-kube-controllers-b9877fd47-255j9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic31e565c037", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.472 [INFO][5114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.472 [INFO][5114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" iface="eth0" netns="" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.472 [INFO][5114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.473 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.502 [INFO][5121] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.503 [INFO][5121] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.503 [INFO][5121] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.513 [WARNING][5121] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.513 [INFO][5121] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" HandleID="k8s-pod-network.0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--kube--controllers--b9877fd47--255j9-eth0" Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.517 [INFO][5121] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.521772 containerd[1461]: 2026-01-17 00:21:06.519 [INFO][5114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c" Jan 17 00:21:06.521772 containerd[1461]: time="2026-01-17T00:21:06.521668472Z" level=info msg="TearDown network for sandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" successfully" Jan 17 00:21:06.526305 containerd[1461]: time="2026-01-17T00:21:06.526080511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:06.526305 containerd[1461]: time="2026-01-17T00:21:06.526166866Z" level=info msg="RemovePodSandbox \"0c1039da7ae9b20848b087b8181a55c71afdddbbdba3827847f3f1622b16355c\" returns successfully" Jan 17 00:21:06.526888 containerd[1461]: time="2026-01-17T00:21:06.526866570Z" level=info msg="StopPodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\"" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.578 [WARNING][5135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5483e8b-299a-4a15-8ed6-7af74d3f03f3", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3", Pod:"coredns-674b8bbfcf-2jwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11697c5d15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.578 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.578 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" iface="eth0" netns="" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.578 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.579 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.607 [INFO][5143] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.608 [INFO][5143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.608 [INFO][5143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.617 [WARNING][5143] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.617 [INFO][5143] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.619 [INFO][5143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.624810 containerd[1461]: 2026-01-17 00:21:06.622 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.624810 containerd[1461]: time="2026-01-17T00:21:06.624694127Z" level=info msg="TearDown network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" successfully" Jan 17 00:21:06.624810 containerd[1461]: time="2026-01-17T00:21:06.624742942Z" level=info msg="StopPodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" returns successfully" Jan 17 00:21:06.626623 containerd[1461]: time="2026-01-17T00:21:06.626399927Z" level=info msg="RemovePodSandbox for \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\"" Jan 17 00:21:06.626623 containerd[1461]: time="2026-01-17T00:21:06.626439104Z" level=info msg="Forcibly stopping sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\"" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.686 [WARNING][5157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5483e8b-299a-4a15-8ed6-7af74d3f03f3", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"b1b8b99763f5050f75ac1978d1114e76fedc2801e92966c571cb33eeeaf3a7e3", Pod:"coredns-674b8bbfcf-2jwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie11697c5d15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.687 [INFO][5157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.687 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" iface="eth0" netns="" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.687 [INFO][5157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.687 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.718 [INFO][5165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.718 [INFO][5165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.718 [INFO][5165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.727 [WARNING][5165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.727 [INFO][5165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" HandleID="k8s-pod-network.ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Workload="ci--4081.3.6--n--8cc98427e3-k8s-coredns--674b8bbfcf--2jwfn-eth0" Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.729 [INFO][5165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.734595 containerd[1461]: 2026-01-17 00:21:06.731 [INFO][5157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c" Jan 17 00:21:06.734595 containerd[1461]: time="2026-01-17T00:21:06.734337238Z" level=info msg="TearDown network for sandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" successfully" Jan 17 00:21:06.741987 containerd[1461]: time="2026-01-17T00:21:06.741432706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:06.741987 containerd[1461]: time="2026-01-17T00:21:06.741510770Z" level=info msg="RemovePodSandbox \"ade9cf2cd9c59dee832d466cc511609b3e68e63411929e06444ebf7b6f7b746c\" returns successfully" Jan 17 00:21:06.742675 containerd[1461]: time="2026-01-17T00:21:06.742581362Z" level=info msg="StopPodSandbox for \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\"" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.793 [WARNING][5179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c0578bf-2fb3-4218-b665-10ff5fcbea9f", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b", Pod:"calico-apiserver-669cbdb5c4-j86b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib926a2f663f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.793 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.793 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" iface="eth0" netns="" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.794 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.794 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.823 [INFO][5186] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.823 [INFO][5186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.824 [INFO][5186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.835 [WARNING][5186] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.835 [INFO][5186] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.848 [INFO][5186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.857526 containerd[1461]: 2026-01-17 00:21:06.852 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.857526 containerd[1461]: time="2026-01-17T00:21:06.857039658Z" level=info msg="TearDown network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" successfully" Jan 17 00:21:06.857526 containerd[1461]: time="2026-01-17T00:21:06.857074801Z" level=info msg="StopPodSandbox for \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" returns successfully" Jan 17 00:21:06.860500 containerd[1461]: time="2026-01-17T00:21:06.859673294Z" level=info msg="RemovePodSandbox for \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\"" Jan 17 00:21:06.860500 containerd[1461]: time="2026-01-17T00:21:06.859989200Z" level=info msg="Forcibly stopping sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\"" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.917 [WARNING][5200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0", GenerateName:"calico-apiserver-669cbdb5c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c0578bf-2fb3-4218-b665-10ff5fcbea9f", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669cbdb5c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8cc98427e3", ContainerID:"08a2903c6ac52d227069f21d6134e33d9afcfab933a242949f31f682d0d7504b", Pod:"calico-apiserver-669cbdb5c4-j86b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib926a2f663f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.918 [INFO][5200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.918 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" iface="eth0" netns="" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.918 [INFO][5200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.918 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.949 [INFO][5209] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.949 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.949 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.958 [WARNING][5209] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.958 [INFO][5209] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" HandleID="k8s-pod-network.f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Workload="ci--4081.3.6--n--8cc98427e3-k8s-calico--apiserver--669cbdb5c4--j86b8-eth0" Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.961 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:21:06.967415 containerd[1461]: 2026-01-17 00:21:06.964 [INFO][5200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7" Jan 17 00:21:06.968091 containerd[1461]: time="2026-01-17T00:21:06.967455699Z" level=info msg="TearDown network for sandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" successfully" Jan 17 00:21:06.970981 containerd[1461]: time="2026-01-17T00:21:06.970891370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:21:06.970981 containerd[1461]: time="2026-01-17T00:21:06.970979973Z" level=info msg="RemovePodSandbox \"f4977e4ced5aa68a533aaba1fd2c49d870d22e29935c3b6a89d9a0566e959ac7\" returns successfully" Jan 17 00:21:09.806183 systemd[1]: Started sshd@7-209.38.74.55:22-4.153.228.146:57014.service - OpenSSH per-connection server daemon (4.153.228.146:57014). Jan 17 00:21:10.139134 containerd[1461]: time="2026-01-17T00:21:10.138483981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:21:10.324696 sshd[5219]: Accepted publickey for core from 4.153.228.146 port 57014 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:10.327588 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:10.337643 systemd-logind[1446]: New session 8 of user core. Jan 17 00:21:10.346922 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:21:10.487003 containerd[1461]: time="2026-01-17T00:21:10.486781400Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:10.490090 containerd[1461]: time="2026-01-17T00:21:10.489943390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:21:10.490090 containerd[1461]: time="2026-01-17T00:21:10.490008675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:21:10.492067 kubelet[2529]: E0117 00:21:10.490464 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:10.492067 kubelet[2529]: E0117 00:21:10.490562 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:10.492067 kubelet[2529]: E0117 00:21:10.490734 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d662067c3c9b46248b5886cf8459eddd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bcbb5f55-c8kgh_calico-system(dbdec3ca-a9b5-4e95-bddf-4459d785adf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:10.494464 containerd[1461]: 
time="2026-01-17T00:21:10.494414772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:21:10.821649 containerd[1461]: time="2026-01-17T00:21:10.821316054Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:10.823587 containerd[1461]: time="2026-01-17T00:21:10.822651176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:21:10.823587 containerd[1461]: time="2026-01-17T00:21:10.822742105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:10.826659 kubelet[2529]: E0117 00:21:10.825832 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:10.826659 kubelet[2529]: E0117 00:21:10.825926 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:10.826659 kubelet[2529]: E0117 00:21:10.826095 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bcbb5f55-c8kgh_calico-system(dbdec3ca-a9b5-4e95-bddf-4459d785adf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:10.828569 kubelet[2529]: E0117 00:21:10.828287 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7" Jan 17 00:21:11.148900 containerd[1461]: time="2026-01-17T00:21:11.148829454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:21:11.198323 sshd[5219]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:11.205785 systemd[1]: sshd@7-209.38.74.55:22-4.153.228.146:57014.service: Deactivated successfully. Jan 17 00:21:11.211031 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:21:11.216704 systemd-logind[1446]: Session 8 logged out. 
Waiting for processes to exit. Jan 17 00:21:11.219779 systemd-logind[1446]: Removed session 8. Jan 17 00:21:11.505083 containerd[1461]: time="2026-01-17T00:21:11.504796939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:11.506089 containerd[1461]: time="2026-01-17T00:21:11.506031750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:21:11.506244 containerd[1461]: time="2026-01-17T00:21:11.506195038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:11.506510 kubelet[2529]: E0117 00:21:11.506384 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:11.507096 kubelet[2529]: E0117 00:21:11.506516 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:21:11.507200 containerd[1461]: time="2026-01-17T00:21:11.507165544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:21:11.520331 kubelet[2529]: E0117 00:21:11.506823 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxlpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-m56rm_calico-system(a4529381-2d40-4d70-a757-b0ee2c920e64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:11.522400 kubelet[2529]: E0117 00:21:11.522303 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64" Jan 17 00:21:11.830173 containerd[1461]: time="2026-01-17T00:21:11.829860958Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:11.831151 containerd[1461]: time="2026-01-17T00:21:11.831090774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:21:11.831562 containerd[1461]: time="2026-01-17T00:21:11.831160739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:11.831889 kubelet[2529]: E0117 00:21:11.831747 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:11.831889 kubelet[2529]: E0117 00:21:11.831808 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:11.832267 kubelet[2529]: E0117 00:21:11.832169 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgjhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b9877fd47-255j9_calico-system(1b489003-62f2-46b7-a6af-3a3a669c193c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:11.833557 containerd[1461]: time="2026-01-17T00:21:11.832906213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:21:11.833656 kubelet[2529]: E0117 00:21:11.833381 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:21:12.158760 containerd[1461]: time="2026-01-17T00:21:12.158553974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:12.159772 containerd[1461]: time="2026-01-17T00:21:12.159654236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:21:12.159772 containerd[1461]: time="2026-01-17T00:21:12.159725229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:21:12.161251 kubelet[2529]: E0117 00:21:12.160021 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:12.161251 kubelet[2529]: E0117 00:21:12.160074 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:12.161251 kubelet[2529]: E0117 00:21:12.160374 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k797,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:12.163164 containerd[1461]: time="2026-01-17T00:21:12.162520214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:12.467403 containerd[1461]: time="2026-01-17T00:21:12.467173897Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:12.468276 containerd[1461]: time="2026-01-17T00:21:12.468211831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:12.468385 containerd[1461]: time="2026-01-17T00:21:12.468301910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:12.468681 kubelet[2529]: E0117 00:21:12.468614 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:12.468774 kubelet[2529]: E0117 00:21:12.468687 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:12.469005 kubelet[2529]: E0117 00:21:12.468937 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zds2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-669cbdb5c4-xt5pt_calico-apiserver(fee43243-8ebd-4cd2-afa5-ba57dc078efe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:12.470035 containerd[1461]: time="2026-01-17T00:21:12.469289189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:21:12.470740 kubelet[2529]: E0117 00:21:12.470696 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:21:12.666547 kubelet[2529]: E0117 00:21:12.666223 2529 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:12.782698 containerd[1461]: time="2026-01-17T00:21:12.782472652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:12.784211 containerd[1461]: time="2026-01-17T00:21:12.784120180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:21:12.784479 containerd[1461]: time="2026-01-17T00:21:12.784130316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:21:12.784608 kubelet[2529]: E0117 00:21:12.784475 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:12.784881 kubelet[2529]: E0117 00:21:12.784659 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:12.784919 kubelet[2529]: E0117 00:21:12.784862 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k797,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:12.786880 kubelet[2529]: E0117 00:21:12.786804 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:21:16.285060 systemd[1]: Started sshd@8-209.38.74.55:22-4.153.228.146:36182.service - OpenSSH per-connection server daemon (4.153.228.146:36182). Jan 17 00:21:16.714305 sshd[5247]: Accepted publickey for core from 4.153.228.146 port 36182 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:16.717258 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:16.725290 systemd-logind[1446]: New session 9 of user core. 
Jan 17 00:21:16.732869 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:21:17.138963 containerd[1461]: time="2026-01-17T00:21:17.138718857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:17.150382 sshd[5247]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:17.154982 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:21:17.156407 systemd[1]: sshd@8-209.38.74.55:22-4.153.228.146:36182.service: Deactivated successfully. Jan 17 00:21:17.159333 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:21:17.162505 systemd-logind[1446]: Removed session 9. Jan 17 00:21:17.500384 containerd[1461]: time="2026-01-17T00:21:17.499973745Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:17.501661 containerd[1461]: time="2026-01-17T00:21:17.501432611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:17.501661 containerd[1461]: time="2026-01-17T00:21:17.501491860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:17.501902 kubelet[2529]: E0117 00:21:17.501786 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:17.501902 kubelet[2529]: E0117 00:21:17.501841 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:17.502439 kubelet[2529]: E0117 00:21:17.502040 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6ptw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-669cbdb5c4-j86b8_calico-apiserver(8c0578bf-2fb3-4218-b665-10ff5fcbea9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:17.503573 kubelet[2529]: E0117 00:21:17.503450 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:21:21.140559 kubelet[2529]: E0117 00:21:21.137875 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:22.233955 systemd[1]: Started sshd@9-209.38.74.55:22-4.153.228.146:36198.service - OpenSSH per-connection server daemon (4.153.228.146:36198). Jan 17 00:21:22.665789 sshd[5262]: Accepted publickey for core from 4.153.228.146 port 36198 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:22.667633 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:22.673940 systemd-logind[1446]: New session 10 of user core. Jan 17 00:21:22.676893 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:21:23.071103 sshd[5262]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:23.077668 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:21:23.078098 systemd[1]: sshd@9-209.38.74.55:22-4.153.228.146:36198.service: Deactivated successfully. Jan 17 00:21:23.080546 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:21:23.081810 systemd-logind[1446]: Removed session 10. 
Jan 17 00:21:23.138944 kubelet[2529]: E0117 00:21:23.137831 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:23.138944 kubelet[2529]: E0117 00:21:23.138231 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:23.146719 systemd[1]: Started sshd@10-209.38.74.55:22-4.153.228.146:36206.service - OpenSSH per-connection server daemon (4.153.228.146:36206). Jan 17 00:21:23.597412 sshd[5275]: Accepted publickey for core from 4.153.228.146 port 36206 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:23.599682 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:23.607395 systemd-logind[1446]: New session 11 of user core. Jan 17 00:21:23.613246 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:21:23.724730 kubelet[2529]: E0117 00:21:23.724692 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:21:24.135502 sshd[5275]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:24.141320 kubelet[2529]: E0117 00:21:24.139488 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:21:24.141320 kubelet[2529]: E0117 00:21:24.140359 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:21:24.141320 kubelet[2529]: E0117 00:21:24.141063 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 00:21:24.147392 systemd[1]: sshd@10-209.38.74.55:22-4.153.228.146:36206.service: Deactivated successfully. Jan 17 00:21:24.151632 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:21:24.155651 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:21:24.157245 systemd-logind[1446]: Removed session 11. Jan 17 00:21:24.223015 systemd[1]: Started sshd@11-209.38.74.55:22-4.153.228.146:36210.service - OpenSSH per-connection server daemon (4.153.228.146:36210). Jan 17 00:21:24.670594 sshd[5309]: Accepted publickey for core from 4.153.228.146 port 36210 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:24.671872 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:24.679036 systemd-logind[1446]: New session 12 of user core. Jan 17 00:21:24.683858 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:21:25.081665 sshd[5309]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:25.086299 systemd[1]: sshd@11-209.38.74.55:22-4.153.228.146:36210.service: Deactivated successfully. Jan 17 00:21:25.088954 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:21:25.090218 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:21:25.092144 systemd-logind[1446]: Removed session 12. Jan 17 00:21:26.139873 kubelet[2529]: E0117 00:21:26.139804 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7" Jan 17 00:21:27.140300 kubelet[2529]: E0117 00:21:27.138615 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64" Jan 17 00:21:28.140051 kubelet[2529]: E0117 00:21:28.138833 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:21:30.168220 systemd[1]: Started sshd@12-209.38.74.55:22-4.153.228.146:51048.service - OpenSSH per-connection server daemon (4.153.228.146:51048). Jan 17 00:21:30.598431 sshd[5328]: Accepted publickey for core from 4.153.228.146 port 51048 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:30.600274 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:30.608176 systemd-logind[1446]: New session 13 of user core. Jan 17 00:21:30.614728 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:21:31.010963 sshd[5328]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:31.021735 systemd[1]: sshd@12-209.38.74.55:22-4.153.228.146:51048.service: Deactivated successfully. Jan 17 00:21:31.024551 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:21:31.027289 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:21:31.028609 systemd-logind[1446]: Removed session 13. Jan 17 00:21:36.100968 systemd[1]: Started sshd@13-209.38.74.55:22-4.153.228.146:34788.service - OpenSSH per-connection server daemon (4.153.228.146:34788). Jan 17 00:21:36.630568 sshd[5346]: Accepted publickey for core from 4.153.228.146 port 34788 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:21:36.634644 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:36.642864 systemd-logind[1446]: New session 14 of user core. Jan 17 00:21:36.651901 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:21:37.093567 sshd[5346]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:37.098486 systemd[1]: sshd@13-209.38.74.55:22-4.153.228.146:34788.service: Deactivated successfully. Jan 17 00:21:37.102158 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:21:37.103430 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:21:37.104736 systemd-logind[1446]: Removed session 14. 
Jan 17 00:21:37.143624 containerd[1461]: time="2026-01-17T00:21:37.143266375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:21:37.487313 containerd[1461]: time="2026-01-17T00:21:37.487125150Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:37.488582 containerd[1461]: time="2026-01-17T00:21:37.488495839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:21:37.488943 kubelet[2529]: E0117 00:21:37.488813 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:37.491123 kubelet[2529]: E0117 00:21:37.488921 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:21:37.491123 kubelet[2529]: E0117 00:21:37.489418 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k797,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:37.491316 containerd[1461]: time="2026-01-17T00:21:37.488508480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:21:37.491921 containerd[1461]: time="2026-01-17T00:21:37.491585422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:37.820094 containerd[1461]: time="2026-01-17T00:21:37.819724467Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:37.821490 containerd[1461]: time="2026-01-17T00:21:37.821438806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:37.821886 containerd[1461]: time="2026-01-17T00:21:37.821618992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:37.821989 kubelet[2529]: E0117 00:21:37.821938 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:37.822129 kubelet[2529]: E0117 00:21:37.822012 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:37.822415 kubelet[2529]: E0117 00:21:37.822340 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zds2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-669cbdb5c4-xt5pt_calico-apiserver(fee43243-8ebd-4cd2-afa5-ba57dc078efe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:37.823685 containerd[1461]: time="2026-01-17T00:21:37.822845312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:21:37.824932 kubelet[2529]: E0117 00:21:37.823954 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe" Jan 17 00:21:38.162853 containerd[1461]: time="2026-01-17T00:21:38.162785192Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:38.163972 containerd[1461]: time="2026-01-17T00:21:38.163923793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:21:38.164078 containerd[1461]: time="2026-01-17T00:21:38.164020057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:21:38.164370 kubelet[2529]: E0117 00:21:38.164321 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:38.164485 kubelet[2529]: E0117 00:21:38.164381 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:21:38.164693 kubelet[2529]: E0117 00:21:38.164644 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k797,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dcsb9_calico-system(96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:38.165856 containerd[1461]: time="2026-01-17T00:21:38.165823978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:21:38.166423 kubelet[2529]: E0117 00:21:38.166269 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0" Jan 17 
00:21:38.511006 containerd[1461]: time="2026-01-17T00:21:38.510690923Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:38.512416 containerd[1461]: time="2026-01-17T00:21:38.512309165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:21:38.512416 containerd[1461]: time="2026-01-17T00:21:38.512376140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:21:38.512674 kubelet[2529]: E0117 00:21:38.512612 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:38.512674 kubelet[2529]: E0117 00:21:38.512666 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:21:38.513215 kubelet[2529]: E0117 00:21:38.512795 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d662067c3c9b46248b5886cf8459eddd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bcbb5f55-c8kgh_calico-system(dbdec3ca-a9b5-4e95-bddf-4459d785adf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:38.515584 containerd[1461]: 
time="2026-01-17T00:21:38.515525227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:21:38.965914 containerd[1461]: time="2026-01-17T00:21:38.965809881Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:38.966999 containerd[1461]: time="2026-01-17T00:21:38.966859016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:21:38.966999 containerd[1461]: time="2026-01-17T00:21:38.966928700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:38.967243 kubelet[2529]: E0117 00:21:38.967132 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:38.967243 kubelet[2529]: E0117 00:21:38.967228 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:21:38.967407 kubelet[2529]: E0117 00:21:38.967361 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr94t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bcbb5f55-c8kgh_calico-system(dbdec3ca-a9b5-4e95-bddf-4459d785adf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:38.969051 kubelet[2529]: E0117 00:21:38.968966 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7" Jan 17 00:21:39.139528 containerd[1461]: time="2026-01-17T00:21:39.138608094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:21:39.453702 containerd[1461]: time="2026-01-17T00:21:39.452335394Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:39.455727 containerd[1461]: time="2026-01-17T00:21:39.454799049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:21:39.455727 containerd[1461]: time="2026-01-17T00:21:39.454912869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:21:39.457215 kubelet[2529]: E0117 00:21:39.455769 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:39.457215 kubelet[2529]: E0117 00:21:39.455835 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:21:39.457215 kubelet[2529]: E0117 00:21:39.456213 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgjhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b9877fd47-255j9_calico-system(1b489003-62f2-46b7-a6af-3a3a669c193c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:39.457666 kubelet[2529]: E0117 00:21:39.457588 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c" Jan 17 00:21:40.144215 containerd[1461]: time="2026-01-17T00:21:40.144093105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:21:40.455767 containerd[1461]: time="2026-01-17T00:21:40.454954788Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:21:40.457253 containerd[1461]: time="2026-01-17T00:21:40.456834456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:21:40.457253 containerd[1461]: time="2026-01-17T00:21:40.457018579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:21:40.457639 kubelet[2529]: E0117 00:21:40.457305 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:40.458821 kubelet[2529]: E0117 00:21:40.457664 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:21:40.458821 
kubelet[2529]: E0117 00:21:40.457848 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6ptw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-669cbdb5c4-j86b8_calico-apiserver(8c0578bf-2fb3-4218-b665-10ff5fcbea9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:21:40.460552 kubelet[2529]: E0117 00:21:40.460393 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f" Jan 17 00:21:42.141526 containerd[1461]: time="2026-01-17T00:21:42.140338780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:21:42.172026 systemd[1]: Started sshd@14-209.38.74.55:22-4.153.228.146:34796.service - OpenSSH per-connection server daemon (4.153.228.146:34796). 
Jan 17 00:21:42.469000 containerd[1461]: time="2026-01-17T00:21:42.468386925Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:21:42.469981 containerd[1461]: time="2026-01-17T00:21:42.469719672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 17 00:21:42.469981 containerd[1461]: time="2026-01-17T00:21:42.469814544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:21:42.470685 kubelet[2529]: E0117 00:21:42.470249 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:21:42.470685 kubelet[2529]: E0117 00:21:42.470380 2529 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:21:42.472227 kubelet[2529]: E0117 00:21:42.472141 2529 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxlpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-m56rm_calico-system(a4529381-2d40-4d70-a757-b0ee2c920e64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:21:42.473445 kubelet[2529]: E0117 00:21:42.473397 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64"
Jan 17 00:21:42.611136 sshd[5361]: Accepted publickey for core from 4.153.228.146 port 34796 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:42.613715 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:42.621523 systemd-logind[1446]: New session 15 of user core.
Jan 17 00:21:42.625928 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:21:43.052713 sshd[5361]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:43.057395 systemd[1]: sshd@14-209.38.74.55:22-4.153.228.146:34796.service: Deactivated successfully.
Jan 17 00:21:43.060519 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:21:43.061452 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:21:43.062967 systemd-logind[1446]: Removed session 15.
Jan 17 00:21:43.137664 kubelet[2529]: E0117 00:21:43.137013 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:21:48.131658 systemd[1]: Started sshd@15-209.38.74.55:22-4.153.228.146:46374.service - OpenSSH per-connection server daemon (4.153.228.146:46374).
Jan 17 00:21:48.540060 sshd[5378]: Accepted publickey for core from 4.153.228.146 port 46374 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:48.541848 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:48.548381 systemd-logind[1446]: New session 16 of user core.
Jan 17 00:21:48.552809 systemd[1]: Started session-16.scope - Session 16 of User core.
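The dns.go:153 events interleaved with the pull failures are unrelated to the registry problem: the node's resolv.conf lists more nameserver entries than the classic glibc limit of three, so kubelet truncates the list and warns (note the duplicated 67.207.67.2 in the applied line). A small sketch of that truncation rule, with a hypothetical host resolv.conf since the real file is not shown in this log:

```python
MAX_NAMESERVERS = 3  # glibc MAXNS; kubelet warns and truncates past this

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Mimic the limit kubelet applies when building a pod's resolv.conf."""
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded; omitting:", servers[MAX_NAMESERVERS:])
    return servers[:MAX_NAMESERVERS]

# Hypothetical input that would reproduce the applied line in the log,
# including the repeated 67.207.67.2.
print(applied_nameservers(
    "nameserver 67.207.67.2\n"
    "nameserver 67.207.67.3\n"
    "nameserver 67.207.67.2\n"
    "nameserver 127.0.0.53\n"
))  # -> ['67.207.67.2', '67.207.67.3', '67.207.67.2']
```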
Jan 17 00:21:48.928872 sshd[5378]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:48.933263 systemd[1]: sshd@15-209.38.74.55:22-4.153.228.146:46374.service: Deactivated successfully.
Jan 17 00:21:48.935592 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:21:48.936711 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:21:48.938202 systemd-logind[1446]: Removed session 16.
Jan 17 00:21:49.022869 systemd[1]: Started sshd@16-209.38.74.55:22-4.153.228.146:46384.service - OpenSSH per-connection server daemon (4.153.228.146:46384).
Jan 17 00:21:49.474196 sshd[5391]: Accepted publickey for core from 4.153.228.146 port 46384 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:49.476193 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:49.483474 systemd-logind[1446]: New session 17 of user core.
Jan 17 00:21:49.492880 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:21:50.030458 sshd[5391]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:50.035287 systemd[1]: sshd@16-209.38.74.55:22-4.153.228.146:46384.service: Deactivated successfully.
Jan 17 00:21:50.038091 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:21:50.039768 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:21:50.041134 systemd-logind[1446]: Removed session 17.
Jan 17 00:21:50.103405 systemd[1]: Started sshd@17-209.38.74.55:22-4.153.228.146:46386.service - OpenSSH per-connection server daemon (4.153.228.146:46386).
Jan 17 00:21:50.511142 sshd[5402]: Accepted publickey for core from 4.153.228.146 port 46386 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:50.513093 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:50.518661 systemd-logind[1446]: New session 18 of user core.
Jan 17 00:21:50.526988 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:21:51.141375 kubelet[2529]: E0117 00:21:51.141306 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c"
Jan 17 00:21:51.595055 sshd[5402]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:51.607151 systemd[1]: sshd@17-209.38.74.55:22-4.153.228.146:46386.service: Deactivated successfully.
Jan 17 00:21:51.613273 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:21:51.620878 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:21:51.622964 systemd-logind[1446]: Removed session 18.
Jan 17 00:21:51.674937 systemd[1]: Started sshd@18-209.38.74.55:22-4.153.228.146:46390.service - OpenSSH per-connection server daemon (4.153.228.146:46390).
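From this point the failing pods have moved from ErrImagePull to ImagePullBackOff: kubelet stops hammering the registry and retries each pull on a growing delay. The cadence sketched below assumes kubelet's usual image-pull backoff of a 10 s base doubling up to a 300 s cap; those constants are an assumption about kubelet defaults, not something recorded in this log:

```python
# Assumed kubelet defaults for the image-pull backoff.
BASE_SECONDS = 10.0
CAP_SECONDS = 300.0

def pull_backoff_schedule(failures: int):
    """Yield (failure_count, delay_before_next_pull_attempt) pairs."""
    delay = BASE_SECONDS
    for n in range(1, failures + 1):
        yield n, min(delay, CAP_SECONDS)
        delay *= 2  # doubles per consecutive failure until the cap

for n, delay in pull_backoff_schedule(8):
    print(f"failure {n}: next pull in {delay:.0f}s")
# 10s, 20s, 40s, ... capping at 300s -- consistent with the widening gaps
# between repeated Back-off entries for the same pods below.
```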
Jan 17 00:21:52.119441 sshd[5420]: Accepted publickey for core from 4.153.228.146 port 46390 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:52.122222 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:52.127918 systemd-logind[1446]: New session 19 of user core.
Jan 17 00:21:52.133780 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:21:52.139975 kubelet[2529]: E0117 00:21:52.138778 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe"
Jan 17 00:21:52.882981 sshd[5420]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:52.889107 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:21:52.890091 systemd[1]: sshd@18-209.38.74.55:22-4.153.228.146:46390.service: Deactivated successfully.
Jan 17 00:21:52.893387 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:21:52.894919 systemd-logind[1446]: Removed session 19.
Jan 17 00:21:52.950925 systemd[1]: Started sshd@19-209.38.74.55:22-4.153.228.146:46404.service - OpenSSH per-connection server daemon (4.153.228.146:46404).
Jan 17 00:21:53.140124 kubelet[2529]: E0117 00:21:53.139949 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7"
Jan 17 00:21:53.141905 kubelet[2529]: E0117 00:21:53.141318 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0"
Jan 17 00:21:53.345049 sshd[5431]: Accepted publickey for core from 4.153.228.146 port 46404 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:53.347114 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:53.352793 systemd-logind[1446]: New session 20 of user core.
Jan 17 00:21:53.357758 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:21:53.752392 sshd[5431]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:53.760259 systemd[1]: sshd@19-209.38.74.55:22-4.153.228.146:46404.service: Deactivated successfully.
Jan 17 00:21:53.765377 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:21:53.767469 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:21:53.769676 systemd-logind[1446]: Removed session 20.
Jan 17 00:21:54.139089 kubelet[2529]: E0117 00:21:54.138641 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64"
Jan 17 00:21:54.139089 kubelet[2529]: E0117 00:21:54.138803 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f"
Jan 17 00:21:55.137952 kubelet[2529]: E0117 00:21:55.137477 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:21:58.840344 systemd[1]: Started sshd@20-209.38.74.55:22-4.153.228.146:45142.service - OpenSSH per-connection server daemon (4.153.228.146:45142).
Jan 17 00:21:59.306081 sshd[5467]: Accepted publickey for core from 4.153.228.146 port 45142 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:21:59.307931 sshd[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:21:59.314240 systemd-logind[1446]: New session 21 of user core.
Jan 17 00:21:59.320787 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:21:59.721093 sshd[5467]: pam_unix(sshd:session): session closed for user core
Jan 17 00:21:59.725285 systemd[1]: sshd@20-209.38.74.55:22-4.153.228.146:45142.service: Deactivated successfully.
Jan 17 00:21:59.727461 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:21:59.728300 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:21:59.729922 systemd-logind[1446]: Removed session 21.
Jan 17 00:22:03.139576 kubelet[2529]: E0117 00:22:03.139384 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b9877fd47-255j9" podUID="1b489003-62f2-46b7-a6af-3a3a669c193c"
Jan 17 00:22:04.143797 kubelet[2529]: E0117 00:22:04.143694 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dcsb9" podUID="96bd27e8-f4d7-4ca9-8ceb-fc56f28a33f0"
Jan 17 00:22:04.806129 systemd[1]: Started sshd@21-209.38.74.55:22-4.153.228.146:48404.service - OpenSSH per-connection server daemon (4.153.228.146:48404).
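Pods with more than one failing container (whisker/whisker-backend, calico-csi/csi-node-driver-registrar) report a bracketed list with one StartContainer error per container, while single-container pods report a bare error. A sketch of that rendering, reconstructed from the shapes visible in these entries rather than from kubelet source:

```python
def pod_sync_error(errors_by_container: dict[str, str]) -> str:
    """Join per-container errors the way the pod_workers entries render them."""
    parts = [
        f'failed to "StartContainer" for "{name}" with ImagePullBackOff: "{msg}"'
        for name, msg in errors_by_container.items()
    ]
    # One failing container -> bare error; several -> bracketed list.
    return parts[0] if len(parts) == 1 else "[" + ", ".join(parts) + "]"

print(pod_sync_error({
    "whisker":
        'Back-off pulling image "ghcr.io/flatcar/calico/whisker:v3.30.4"',
    "whisker-backend":
        'Back-off pulling image "ghcr.io/flatcar/calico/whisker-backend:v3.30.4"',
}))
```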
Jan 17 00:22:05.146491 kubelet[2529]: E0117 00:22:05.146383 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bcbb5f55-c8kgh" podUID="dbdec3ca-a9b5-4e95-bddf-4459d785adf7"
Jan 17 00:22:05.151704 kubelet[2529]: E0117 00:22:05.151650 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-m56rm" podUID="a4529381-2d40-4d70-a757-b0ee2c920e64"
Jan 17 00:22:05.247677 sshd[5480]: Accepted publickey for core from 4.153.228.146 port 48404 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:22:05.255784 sshd[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:22:05.272070 systemd-logind[1446]: New session 22 of user core.
Jan 17 00:22:05.276879 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:22:05.750672 sshd[5480]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:05.760048 systemd[1]: sshd@21-209.38.74.55:22-4.153.228.146:48404.service: Deactivated successfully.
Jan 17 00:22:05.765203 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:22:05.771374 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:22:05.774702 systemd-logind[1446]: Removed session 22.
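Every Accepted publickey line above carries the same SHA256:d1xs... fingerprint, so sessions 14 through 22 all authenticate with one key. OpenSSH derives that string by hashing the raw public-key blob with SHA-256 and printing the digest as unpadded base64; a small sketch, using a hypothetical truncated key since the droplet's actual key does not appear in the log:

```python
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    """SHA256 fingerprint as sshd logs it: base64(sha256(key blob)), no '='."""
    blob_b64 = authorized_keys_line.split()[1]  # "ssh-rsa AAAA... comment"
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical, deliberately truncated key purely for demonstration.
example = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ3gkXU0d1yyNk2LonDFsw demo"
print(ssh_fingerprint(example))
```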
Jan 17 00:22:06.138109 kubelet[2529]: E0117 00:22:06.137721 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-xt5pt" podUID="fee43243-8ebd-4cd2-afa5-ba57dc078efe"
Jan 17 00:22:07.139602 kubelet[2529]: E0117 00:22:07.138896 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-669cbdb5c4-j86b8" podUID="8c0578bf-2fb3-4218-b665-10ff5fcbea9f"
Jan 17 00:22:08.137345 kubelet[2529]: E0117 00:22:08.137298 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"