Jan 17 00:25:38.963757 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:25:38.963779 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:25:38.963788 kernel: BIOS-provided physical RAM map:
Jan 17 00:25:38.963793 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:25:38.963798 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 17 00:25:38.963802 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 17 00:25:38.963807 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 17 00:25:38.963812 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Jan 17 00:25:38.963816 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Jan 17 00:25:38.963820 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Jan 17 00:25:38.963825 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 17 00:25:38.963832 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 17 00:25:38.963836 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 17 00:25:38.963841 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 17 00:25:38.963846 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 17 00:25:38.963851 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:25:38.963858 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 17 00:25:38.963863 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 17 00:25:38.963868 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 00:25:38.963873 kernel: NX (Execute Disable) protection: active
Jan 17 00:25:38.963880 kernel: APIC: Static calls initialized
Jan 17 00:25:38.963887 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 17 00:25:38.963895 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198
Jan 17 00:25:38.963902 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 17 00:25:38.963909 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 17 00:25:38.963917 kernel: SMBIOS 3.0.0 present.
Jan 17 00:25:38.963924 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 17 00:25:38.963929 kernel: Hypervisor detected: KVM
Jan 17 00:25:38.963937 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:25:38.963941 kernel: kvm-clock: using sched offset of 12671203796 cycles
Jan 17 00:25:38.963947 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:25:38.963952 kernel: tsc: Detected 2399.998 MHz processor
Jan 17 00:25:38.963957 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:25:38.963962 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:25:38.963966 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 17 00:25:38.963971 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:25:38.963976 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:25:38.963984 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 17 00:25:38.963989 kernel: Using GB pages for direct mapping
Jan 17 00:25:38.963994 kernel: Secure boot disabled
Jan 17 00:25:38.964001 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:25:38.964007 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 17 00:25:38.964012 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:25:38.964017 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:25:38.964025 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:25:38.964030 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 17 00:25:38.964036 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:25:38.964041 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:25:38.964046 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:25:38.964051 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:25:38.964056 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:25:38.964064 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 17 00:25:38.964069 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 17 00:25:38.964074 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 17 00:25:38.964079 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 17 00:25:38.964084 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 17 00:25:38.964088 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 17 00:25:38.964106 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 17 00:25:38.964119 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 17 00:25:38.964125 kernel: No NUMA configuration found
Jan 17 00:25:38.964133 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 17 00:25:38.964137 kernel: NODE_DATA(0) allocated [mem 0x179ff8000-0x179ffdfff]
Jan 17 00:25:38.964143 kernel: Zone ranges:
Jan 17 00:25:38.964148 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:25:38.964153 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:25:38.964158 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Jan 17 00:25:38.964163 kernel: Movable zone start for each node
Jan 17 00:25:38.964168 kernel: Early memory node ranges
Jan 17 00:25:38.964173 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:25:38.964180 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 17 00:25:38.964185 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 17 00:25:38.964190 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 17 00:25:38.964195 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 17 00:25:38.964201 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 17 00:25:38.964206 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:25:38.964210 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:25:38.964215 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 17 00:25:38.964220 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 00:25:38.964225 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 17 00:25:38.964234 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 17 00:25:38.964239 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:25:38.964244 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:25:38.964249 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:25:38.964254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:25:38.964258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:25:38.964264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:25:38.964269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:25:38.964276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:25:38.964282 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:25:38.964288 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:25:38.964296 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:25:38.964304 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:25:38.964311 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 17 00:25:38.964319 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:25:38.964327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:25:38.964334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:25:38.964341 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:25:38.964346 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:25:38.964351 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:25:38.964356 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:25:38.964362 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:25:38.964367 kernel: random: crng init done
Jan 17 00:25:38.964372 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:25:38.964378 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:25:38.964385 kernel: Fallback order for Node 0: 0
Jan 17 00:25:38.964390 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Jan 17 00:25:38.964395 kernel: Policy zone: Normal
Jan 17 00:25:38.964400 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:25:38.964405 kernel: software IO TLB: area num 2.
Jan 17 00:25:38.964410 kernel: Memory: 3827764K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 263200K reserved, 0K cma-reserved)
Jan 17 00:25:38.964415 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:25:38.964420 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:25:38.964425 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:25:38.964433 kernel: Dynamic Preempt: voluntary
Jan 17 00:25:38.964438 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:25:38.964447 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:25:38.964453 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:25:38.964458 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:25:38.964473 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:25:38.964485 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:25:38.964491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:25:38.964496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:25:38.964502 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:25:38.964507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:25:38.964516 kernel: Console: colour dummy device 80x25
Jan 17 00:25:38.964527 kernel: printk: console [tty0] enabled
Jan 17 00:25:38.964533 kernel: printk: console [ttyS0] enabled
Jan 17 00:25:38.964538 kernel: ACPI: Core revision 20230628
Jan 17 00:25:38.964544 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:25:38.964549 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:25:38.964557 kernel: x2apic enabled
Jan 17 00:25:38.964562 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:25:38.964567 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:25:38.964573 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:25:38.964579 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Jan 17 00:25:38.964584 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:25:38.964589 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:25:38.964595 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:25:38.964600 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:25:38.964608 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 17 00:25:38.964613 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:25:38.964619 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:25:38.964624 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:25:38.964629 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 17 00:25:38.964635 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:25:38.964640 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:25:38.964646 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:25:38.964651 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:25:38.964660 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:25:38.964665 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:25:38.964671 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:25:38.964676 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:25:38.964682 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:25:38.964688 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:25:38.964696 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 17 00:25:38.964704 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 17 00:25:38.964711 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 17 00:25:38.964719 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 17 00:25:38.964724 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 17 00:25:38.964730 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:25:38.964738 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:25:38.964746 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:25:38.964753 kernel: landlock: Up and running.
Jan 17 00:25:38.964758 kernel: SELinux: Initializing.
Jan 17 00:25:38.964763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:25:38.964769 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:25:38.964777 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 17 00:25:38.964782 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:25:38.964788 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:25:38.964793 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:25:38.964798 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 00:25:38.964804 kernel: ... version: 0
Jan 17 00:25:38.964809 kernel: ... bit width: 48
Jan 17 00:25:38.964814 kernel: ... generic registers: 6
Jan 17 00:25:38.964819 kernel: ... value mask: 0000ffffffffffff
Jan 17 00:25:38.964827 kernel: ... max period: 00007fffffffffff
Jan 17 00:25:38.964832 kernel: ... fixed-purpose events: 0
Jan 17 00:25:38.964838 kernel: ... event mask: 000000000000003f
Jan 17 00:25:38.964843 kernel: signal: max sigframe size: 3376
Jan 17 00:25:38.964848 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:25:38.964854 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:25:38.964859 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:25:38.964864 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:25:38.964870 kernel: .... node #0, CPUs: #1
Jan 17 00:25:38.964878 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:25:38.964883 kernel: smpboot: Max logical packages: 1
Jan 17 00:25:38.964888 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Jan 17 00:25:38.964894 kernel: devtmpfs: initialized
Jan 17 00:25:38.964899 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:25:38.964904 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 17 00:25:38.964909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:25:38.964915 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:25:38.964920 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:25:38.964931 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:25:38.964939 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:25:38.964947 kernel: audit: type=2000 audit(1768609537.251:1): state=initialized audit_enabled=0 res=1
Jan 17 00:25:38.964953 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:25:38.964958 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:25:38.964963 kernel: cpuidle: using governor menu
Jan 17 00:25:38.964969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:25:38.964978 kernel: dca service started, version 1.12.1
Jan 17 00:25:38.964987 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 17 00:25:38.964995 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:25:38.965000 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:25:38.965006 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:25:38.965011 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:25:38.965017 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:25:38.965022 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:25:38.965027 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:25:38.965032 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:25:38.965038 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:25:38.965046 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:25:38.965051 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:25:38.965056 kernel: ACPI: Interpreter enabled
Jan 17 00:25:38.965061 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:25:38.965067 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:25:38.965072 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:25:38.965078 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:25:38.965083 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:25:38.965088 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:25:38.965307 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:25:38.965419 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:25:38.965543 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:25:38.965553 kernel: PCI host bridge to bus 0000:00
Jan 17 00:25:38.965665 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:25:38.965769 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:25:38.965870 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:25:38.965961 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 17 00:25:38.966068 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 17 00:25:38.968233 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 17 00:25:38.968345 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:25:38.968507 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:25:38.968645 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:25:38.968797 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Jan 17 00:25:38.968941 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 17 00:25:38.969073 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Jan 17 00:25:38.969242 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:25:38.969355 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:25:38.969456 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:25:38.969561 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.969663 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Jan 17 00:25:38.969769 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.969867 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Jan 17 00:25:38.969972 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.970069 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Jan 17 00:25:38.970234 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.970345 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Jan 17 00:25:38.970449 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.970547 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Jan 17 00:25:38.970650 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.970747 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Jan 17 00:25:38.970878 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.970998 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Jan 17 00:25:38.971421 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.971557 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Jan 17 00:25:38.971709 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:25:38.971859 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Jan 17 00:25:38.972025 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:25:38.972669 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:25:38.972803 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:25:38.972918 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Jan 17 00:25:38.973039 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Jan 17 00:25:38.973223 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:25:38.973352 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Jan 17 00:25:38.973483 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:25:38.973619 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Jan 17 00:25:38.973739 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 17 00:25:38.973858 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:25:38.973980 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:25:38.974393 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 17 00:25:38.974536 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:25:38.974680 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:25:38.974844 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Jan 17 00:25:38.974998 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:25:38.975141 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 17 00:25:38.975259 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 00:25:38.975364 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Jan 17 00:25:38.975467 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 17 00:25:38.975571 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:25:38.975670 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 17 00:25:38.975767 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:25:38.975910 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 00:25:38.976036 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 17 00:25:38.976207 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:25:38.976341 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:25:38.976497 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:25:38.976639 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Jan 17 00:25:38.976778 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 17 00:25:38.976916 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:25:38.977075 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 17 00:25:38.977274 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:25:38.977435 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 00:25:38.977582 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Jan 17 00:25:38.977720 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 17 00:25:38.977834 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:25:38.977973 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 17 00:25:38.978249 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:25:38.978265 kernel: acpiphp: Slot [0] registered
Jan 17 00:25:38.978420 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:25:38.978567 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Jan 17 00:25:38.978725 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 17 00:25:38.978875 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:25:38.979026 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:25:38.979485 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 17 00:25:38.979593 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:25:38.979600 kernel: acpiphp: Slot [0-2] registered
Jan 17 00:25:38.979703 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:25:38.979802 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 17 00:25:38.979939 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:25:38.979952 kernel: acpiphp: Slot [0-3] registered
Jan 17 00:25:38.980090 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:25:38.980228 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 17 00:25:38.980340 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:25:38.980352 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:25:38.980361 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:25:38.980369 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:25:38.980383 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:25:38.980392 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:25:38.980400 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:25:38.980409 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:25:38.980417 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:25:38.980426 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:25:38.980434 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:25:38.980442 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:25:38.980449 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:25:38.980462 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:25:38.980469 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:25:38.980478 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:25:38.980486 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:25:38.980493 kernel: iommu: Default domain type: Translated
Jan 17 00:25:38.980501 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:25:38.980509 kernel: efivars: Registered efivars operations
Jan 17 00:25:38.980517 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:25:38.980525 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:25:38.980537 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 17 00:25:38.980545 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 17 00:25:38.980553 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 17 00:25:38.980561 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 17 00:25:38.980703 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:25:38.980844 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:25:38.980972 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:25:38.980983 kernel: vgaarb: loaded
Jan 17 00:25:38.980991 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:25:38.981003 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:25:38.981011 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:25:38.981019 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:25:38.981027 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:25:38.981034 kernel: pnp: PnP ACPI init
Jan 17 00:25:38.981224 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 17 00:25:38.981240 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:25:38.981246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:25:38.981257 kernel: NET: Registered PF_INET protocol family
Jan 17 00:25:38.981278 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:25:38.981287 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:25:38.981293 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:25:38.981299 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:25:38.981304 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:25:38.981310 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:25:38.981316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:25:38.981322 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:25:38.981330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:25:38.981336 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:25:38.981451 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 17 00:25:38.981558 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 17 00:25:38.984277 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:25:38.984437 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:25:38.984560 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:25:38.984671 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:25:38.984772 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:25:38.984887 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:25:38.985014 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Jan 17 00:25:38.985260 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:25:38.985387 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 17 00:25:38.985507 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:25:38.985638 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:25:38.985771 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 17 00:25:38.985904 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:25:38.986035 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 17 00:25:38.987252 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:25:38.987391 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:25:38.987528 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:25:38.987635 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:25:38.987734 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 17 00:25:38.987848 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:25:38.987970 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:25:38.988070 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 17 00:25:38.988234 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:25:38.988345 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Jan 17 00:25:38.988464 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:25:38.988561 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 17 00:25:38.988657 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 17 00:25:38.988785 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:25:38.988895 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:25:38.988993 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 17 00:25:38.989091 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 17 00:25:38.990284 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:25:38.990401 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:25:38.990503 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 17 00:25:38.990600 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 17 00:25:38.990697 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:25:38.990797 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:25:38.990892 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:25:38.990983 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:25:38.991073 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 17 00:25:38.991281 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 17 00:25:38.991372 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Jan 17 00:25:38.991477 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Jan 17 00:25:38.991572 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 17 00:25:38.991678 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Jan 17 00:25:38.991780 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Jan 17 00:25:38.991874 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 17 00:25:38.991978 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 17 00:25:38.992081 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Jan 17 00:25:38.992197 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 17 00:25:38.992345 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Jan 17 00:25:38.992463 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 17 00:25:38.992571 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 17 00:25:38.992666 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 17 00:25:38.992762 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 17 00:25:38.992867 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 17 00:25:38.992963 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 17 00:25:38.993061 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 17 00:25:38.995314 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 17 00:25:38.995460 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Jan 17 00:25:38.995582 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 17 00:25:38.995593 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:25:38.995601 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:25:38.995607 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:25:38.995614 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Jan 17 00:25:38.995627 kernel: Initialise system trusted keyrings
Jan 17 00:25:38.995633 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:25:38.995639 kernel: Key type asymmetric registered
Jan 17 00:25:38.995645 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:25:38.995651 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:25:38.995657 kernel: io scheduler mq-deadline registered
Jan 17 00:25:38.995663 kernel: io scheduler kyber registered
Jan 17 00:25:38.995669 kernel: io scheduler bfq registered
Jan 17 00:25:38.995811 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 17 00:25:38.995950 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 17 00:25:38.996065 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 17 00:25:38.996296 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 17 00:25:38.996437 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 17 00:25:38.996573 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 17 00:25:38.996679 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 17 00:25:38.996793 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 17 00:25:38.996903 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 17 00:25:38.997007 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 17 00:25:38.998190 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 17 00:25:38.998353 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 17 00:25:38.998494 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 17 00:25:38.998607 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 17 00:25:38.998747 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 17 00:25:38.998858 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 17 00:25:38.998867 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:25:38.998981 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 17 00:25:38.999092 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 17 00:25:38.999143 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:25:38.999149 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 17 00:25:38.999155 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:25:38.999162 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:25:38.999168 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:25:38.999174 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:25:38.999180 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:25:38.999316 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 00:25:38.999329 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:25:38.999430 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 00:25:38.999553 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:25:38 UTC (1768609538)
Jan 17 00:25:38.999685 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:25:38.999695 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:25:38.999702 kernel: efifb: probing for efifb
Jan 17 00:25:38.999712 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k
Jan 17 00:25:38.999719 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 17 00:25:38.999726 kernel: efifb: scrolling: redraw
Jan 17 00:25:38.999734 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:25:38.999740 kernel: Console: switching to colour frame buffer device 160x50
Jan 17 00:25:38.999747 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:25:38.999755 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:25:38.999765 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:25:38.999773 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:25:38.999783 kernel: Segment Routing with IPv6
Jan 17 00:25:38.999795 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:25:38.999806 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:25:38.999814 kernel: Key type dns_resolver registered
Jan 17 00:25:38.999820 kernel: IPI shorthand broadcast: enabled
Jan 17 00:25:38.999826 kernel: sched_clock: Marking stable (1306030003, 224455956)->(1577326407, -46840448)
Jan 17 00:25:38.999832 kernel: registered taskstats version 1
Jan 17 00:25:38.999839 kernel: Loading compiled-in X.509 certificates
Jan 17 00:25:38.999844 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:25:38.999854 kernel: Key type .fscrypt registered
Jan 17 00:25:38.999866 kernel: Key type fscrypt-provisioning registered
Jan 17 00:25:38.999878 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:25:38.999886 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:25:38.999892 kernel: ima: No architecture policies found
Jan 17 00:25:38.999898 kernel: clk: Disabling unused clocks
Jan 17 00:25:38.999905 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:25:38.999910 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:25:38.999917 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:25:38.999926 kernel: Run /init as init process
Jan 17 00:25:38.999933 kernel: with arguments:
Jan 17 00:25:38.999940 kernel: /init
Jan 17 00:25:38.999947 kernel: with environment:
Jan 17 00:25:38.999952 kernel: HOME=/
Jan 17 00:25:38.999958 kernel: TERM=linux
Jan 17 00:25:38.999967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:25:38.999976 systemd[1]: Detected virtualization kvm.
Jan 17 00:25:38.999985 systemd[1]: Detected architecture x86-64.
Jan 17 00:25:38.999992 systemd[1]: Running in initrd.
Jan 17 00:25:38.999999 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:25:39.000006 systemd[1]: Hostname set to .
Jan 17 00:25:39.000013 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:25:39.000018 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:25:39.000025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:25:39.000031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:25:39.000040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:25:39.000046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:25:39.000052 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:25:39.000059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:25:39.000066 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:25:39.000072 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:25:39.000078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:25:39.000086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:25:39.001156 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:25:39.001174 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:25:39.001181 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:25:39.001188 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:25:39.001195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:25:39.001207 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:25:39.001221 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:25:39.001236 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:25:39.001242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:25:39.001250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:25:39.001257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:25:39.001265 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:25:39.001271 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:25:39.001276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:25:39.001282 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:25:39.001289 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:25:39.001298 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:25:39.001306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:25:39.001340 systemd-journald[188]: Collecting audit messages is disabled.
Jan 17 00:25:39.001359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:25:39.001368 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:25:39.001375 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:25:39.001382 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:25:39.001391 systemd-journald[188]: Journal started
Jan 17 00:25:39.001409 systemd-journald[188]: Runtime Journal (/run/log/journal/0fc40b7da28b4963b337014284a85818) is 8.0M, max 76.3M, 68.3M free.
Jan 17 00:25:38.975067 systemd-modules-load[189]: Inserted module 'overlay'
Jan 17 00:25:39.011147 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:25:39.013238 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 17 00:25:39.014123 kernel: Bridge firewalling registered
Jan 17 00:25:39.020613 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:25:39.020668 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:25:39.021446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:25:39.022577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:25:39.023183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:25:39.026695 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:25:39.028351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:25:39.037642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:25:39.038994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:25:39.059422 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:25:39.060401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:25:39.061731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:25:39.069326 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:25:39.070039 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:25:39.074082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:25:39.082251 dracut-cmdline[220]: dracut-dracut-053
Jan 17 00:25:39.086242 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:25:39.109771 systemd-resolved[223]: Positive Trust Anchors:
Jan 17 00:25:39.110602 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:25:39.111014 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:25:39.114382 systemd-resolved[223]: Defaulting to hostname 'linux'.
Jan 17 00:25:39.116937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:25:39.117446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:25:39.153135 kernel: SCSI subsystem initialized
Jan 17 00:25:39.161128 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:25:39.171139 kernel: iscsi: registered transport (tcp)
Jan 17 00:25:39.189423 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:25:39.189503 kernel: QLogic iSCSI HBA Driver
Jan 17 00:25:39.230258 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:25:39.237298 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:25:39.260984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:25:39.261060 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:25:39.261071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:25:39.303145 kernel: raid6: avx512x4 gen() 36930 MB/s
Jan 17 00:25:39.321140 kernel: raid6: avx512x2 gen() 34970 MB/s
Jan 17 00:25:39.339180 kernel: raid6: avx512x1 gen() 30547 MB/s
Jan 17 00:25:39.357147 kernel: raid6: avx2x4 gen() 26367 MB/s
Jan 17 00:25:39.375142 kernel: raid6: avx2x2 gen() 26702 MB/s
Jan 17 00:25:39.393262 kernel: raid6: avx2x1 gen() 23389 MB/s
Jan 17 00:25:39.393344 kernel: raid6: using algorithm avx512x4 gen() 36930 MB/s
Jan 17 00:25:39.413361 kernel: raid6: .... xor() 3793 MB/s, rmw enabled
Jan 17 00:25:39.413439 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:25:39.432154 kernel: xor: automatically using best checksumming function avx
Jan 17 00:25:39.559147 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:25:39.570599 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:25:39.580507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:25:39.593661 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jan 17 00:25:39.598811 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:25:39.609370 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:25:39.626543 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 17 00:25:39.665778 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:25:39.676384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:25:39.775899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:25:39.783358 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:25:39.802796 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:25:39.804866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:25:39.806420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:25:39.807513 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:25:39.814449 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:25:39.837679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:25:39.890135 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:25:39.911191 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:25:39.922147 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 17 00:25:39.926642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:25:39.927843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:25:39.930282 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:25:39.930608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:25:39.930938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:25:39.932227 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:25:39.938344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:25:39.943136 kernel: ACPI: bus type USB registered
Jan 17 00:25:39.944588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:25:39.945429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:25:39.948198 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:25:39.948322 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:25:39.956344 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:25:39.956384 kernel: usbcore: registered new interface driver hub
Jan 17 00:25:39.958178 kernel: libata version 3.00 loaded.
Jan 17 00:25:39.963381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:25:39.968167 kernel: usbcore: registered new device driver usb
Jan 17 00:25:40.001858 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:25:40.002107 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 17 00:25:40.002547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:25:40.007352 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 00:25:40.007507 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:25:40.007632 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 17 00:25:40.011697 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 00:25:40.013325 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:25:40.013481 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:25:40.014544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:25:40.019686 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:25:40.020258 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 00:25:40.022312 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 00:25:40.022453 kernel: hub 2-0:1.0: USB hub found
Jan 17 00:25:40.028192 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:25:40.028441 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:25:40.028560 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 00:25:40.035123 kernel: scsi host1: ahci
Jan 17 00:25:40.040337 kernel: scsi host2: ahci
Jan 17 00:25:40.040564 kernel: scsi host3: ahci
Jan 17 00:25:40.043163 kernel: scsi host4: ahci
Jan 17 00:25:40.045544 kernel: scsi host5: ahci
Jan 17 00:25:40.045780 kernel: scsi host6: ahci
Jan 17 00:25:40.057667 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51
Jan 17 00:25:40.057718 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51
Jan 17 00:25:40.057727 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51
Jan 17 00:25:40.057734 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51
Jan 17 00:25:40.057742 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51
Jan 17 00:25:40.057750 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51
Jan 17 00:25:40.059451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:25:40.061276 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 17 00:25:40.061566 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Jan 17 00:25:40.066205 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:25:40.066396 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 17 00:25:40.066527 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:25:40.074877 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:25:40.074921 kernel: GPT:17805311 != 160006143
Jan 17 00:25:40.074932 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:25:40.077892 kernel: GPT:17805311 != 160006143
Jan 17 00:25:40.077930 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:25:40.080636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:25:40.081199 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:25:40.257688 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 00:25:40.378147 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:25:40.378267 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:25:40.385178 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 00:25:40.390163 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:25:40.395552 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:25:40.395606 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:25:40.401136 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:25:40.402202 kernel: ata1.00: applying bridge limits Jan 17 00:25:40.407212 kernel: ata1.00: configured for UDMA/100 Jan 17 00:25:40.413180 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:25:40.419179 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:25:40.453421 kernel: usbcore: registered new interface driver usbhid Jan 17 00:25:40.453504 kernel: usbhid: USB HID core driver Jan 17 00:25:40.475199 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 17 00:25:40.486182 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 17 00:25:40.502203 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:25:40.502649 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:25:40.524167 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:25:40.530936 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 17 00:25:40.537132 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (474) Jan 17 00:25:40.541106 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (480) Jan 17 00:25:40.541244 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 17 00:25:40.554435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:25:40.558425 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 17 00:25:40.559729 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 17 00:25:40.566259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:25:40.576502 disk-uuid[582]: Primary Header is updated. Jan 17 00:25:40.576502 disk-uuid[582]: Secondary Entries is updated. Jan 17 00:25:40.576502 disk-uuid[582]: Secondary Header is updated. Jan 17 00:25:40.588112 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:25:40.594169 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:25:41.602179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:25:41.604229 disk-uuid[583]: The operation has completed successfully. Jan 17 00:25:41.692269 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:25:41.692447 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:25:41.713387 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 17 00:25:41.723406 sh[600]: Success Jan 17 00:25:41.749179 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:25:41.825589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:25:41.836256 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:25:41.841163 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:25:41.879344 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:25:41.879431 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:25:41.879449 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:25:41.889656 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:25:41.889709 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:25:41.907187 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:25:41.911483 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:25:41.913765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:25:41.922426 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:25:41.930560 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:25:41.965662 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:25:41.965745 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:25:41.973179 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:25:41.990422 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:25:41.990494 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:25:42.016427 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:25:42.015931 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:25:42.029641 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:25:42.037376 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:25:42.149437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:25:42.158076 ignition[715]: Ignition 2.19.0 Jan 17 00:25:42.158093 ignition[715]: Stage: fetch-offline Jan 17 00:25:42.162323 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:25:42.158665 ignition[715]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:42.165006 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
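verity-setup just opened the read-only /usr image as /dev/mapper/usr, with the kernel selecting the SHA-NI-accelerated sha256 implementation for it. dm-verity hashes fixed-size data blocks into a Merkle tree whose root hash the kernel is given at boot, and every read is verified against that tree. A simplified sketch of how the leaf level of such a tree is built (real dm-verity adds a salt, superblock metadata, and as many tree levels as needed; the file name and block size here are illustrative assumptions):

import hashlib

BLOCK = 4096  # dm-verity's default data-block size

def leaf_hashes(path):
    # hash every data block: this is the lowest level of the verity tree
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            digests.append(hashlib.sha256(chunk.ljust(BLOCK, b"\0")).digest())
    return digests

leaves = leaf_hashes("usr.img")  # hypothetical stand-in for the USR partition
# hashing 128 child digests per node (4096 / 32 bytes) collapses the tree one
# level; repeating until a single digest remains yields the root hash
level_up = [hashlib.sha256(b"".join(leaves[i:i + 128])).digest()
            for i in range(0, len(leaves), 128)]
print(f"{len(leaves)} leaf hashes -> {len(level_up)} nodes one level up")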
Jan 17 00:25:42.158676 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:42.158757 ignition[715]: parsed url from cmdline: "" Jan 17 00:25:42.158761 ignition[715]: no config URL provided Jan 17 00:25:42.158765 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:25:42.158773 ignition[715]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:25:42.158778 ignition[715]: failed to fetch config: resource requires networking Jan 17 00:25:42.158925 ignition[715]: Ignition finished successfully Jan 17 00:25:42.178990 systemd-networkd[786]: lo: Link UP Jan 17 00:25:42.178999 systemd-networkd[786]: lo: Gained carrier Jan 17 00:25:42.181277 systemd-networkd[786]: Enumeration completed Jan 17 00:25:42.181842 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:42.181845 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:25:42.182187 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:25:42.182600 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:42.182604 systemd-networkd[786]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:25:42.183657 systemd-networkd[786]: eth0: Link UP Jan 17 00:25:42.183661 systemd-networkd[786]: eth0: Gained carrier Jan 17 00:25:42.183668 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:42.185594 systemd[1]: Reached target network.target - Network. Jan 17 00:25:42.187584 systemd-networkd[786]: eth1: Link UP Jan 17 00:25:42.187588 systemd-networkd[786]: eth1: Gained carrier Jan 17 00:25:42.187594 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:42.198364 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
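Both NICs matched '/usr/lib/systemd/network/zz-default.network', Flatcar's catch-all networkd unit (the zz- prefix sorts it last, so it only applies when nothing more specific matches), which also explains why fetch-offline above gave up with "failed to fetch config: resource requires networking": no interface had an address yet. The exact contents of that file are not shown in the log, but a catch-all DHCP unit of this kind has roughly this shape (illustrative, not verbatim):

# /usr/lib/systemd/network/zz-default.network (assumed shape)
[Match]
Name=*

[Network]
DHCP=yes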
Jan 17 00:25:42.211021 ignition[789]: Ignition 2.19.0 Jan 17 00:25:42.211035 ignition[789]: Stage: fetch Jan 17 00:25:42.211190 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:42.211199 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:42.211265 ignition[789]: parsed url from cmdline: "" Jan 17 00:25:42.211269 ignition[789]: no config URL provided Jan 17 00:25:42.211273 ignition[789]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:25:42.211281 ignition[789]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:25:42.211296 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 17 00:25:42.211422 ignition[789]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 17 00:25:42.241158 systemd-networkd[786]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:25:42.260150 systemd-networkd[786]: eth0: DHCPv4 address 135.181.41.243/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:25:42.411648 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 17 00:25:42.418750 ignition[789]: GET result: OK Jan 17 00:25:42.418817 ignition[789]: parsing config with SHA512: 780938edcf0496a045ea06f5009537349546b961b4aa2686a0cd127b17b7c1fc9284c7d558cbd6e723d6488eadaefaaacb0acbdc9ebd2e052718d378b0f2edd2 Jan 17 00:25:42.421860 unknown[789]: fetched base config from "system" Jan 17 00:25:42.421877 unknown[789]: fetched base config from "system" Jan 17 00:25:42.422166 ignition[789]: fetch: fetch complete Jan 17 00:25:42.421883 unknown[789]: fetched user config from "hetzner" Jan 17 00:25:42.422171 ignition[789]: fetch: fetch passed Jan 17 00:25:42.425515 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:25:42.422212 ignition[789]: Ignition finished successfully Jan 17 00:25:42.439351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:25:42.454087 ignition[797]: Ignition 2.19.0 Jan 17 00:25:42.454972 ignition[797]: Stage: kargs Jan 17 00:25:42.455164 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:42.455175 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:42.455856 ignition[797]: kargs: kargs passed Jan 17 00:25:42.457247 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:25:42.455901 ignition[797]: Ignition finished successfully Jan 17 00:25:42.465275 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:25:42.486211 ignition[803]: Ignition 2.19.0 Jan 17 00:25:42.486232 ignition[803]: Stage: disks Jan 17 00:25:42.486529 ignition[803]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:42.486550 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:42.487829 ignition[803]: disks: disks passed Jan 17 00:25:42.487915 ignition[803]: Ignition finished successfully Jan 17 00:25:42.490514 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:25:42.491751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:25:42.492280 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:25:42.493581 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:25:42.494793 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:25:42.495969 systemd[1]: Reached target basic.target - Basic System. 
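The fetch stage above shows Ignition's retry loop end to end: attempt #1 fails with "network is unreachable", DHCP then configures eth1 (10.0.0.3) and eth0 (135.181.41.243), and attempt #2 returns the config, which Ignition fingerprints by its SHA512. A minimal sketch of that fetch-and-fingerprint sequence, assuming Python; the URL and digest handling come from the log, while the attempt count and delay are arbitrary choices:

import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"  # Hetzner userdata endpoint from the log

def fetch_userdata(attempts=5, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error: {err}: attempt #{attempt}")  # mirrors the log lines
            time.sleep(delay)
    raise RuntimeError("userdata not reachable")

data = fetch_userdata()
print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())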
Jan 17 00:25:42.501329 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:25:42.523387 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:25:42.529251 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:25:42.537319 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:25:42.669145 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:25:42.671029 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:25:42.672817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:25:42.687322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:25:42.690297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:25:42.693386 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:25:42.694839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:25:42.715920 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (820) Jan 17 00:25:42.715962 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:25:42.715983 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:25:42.716002 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:25:42.716022 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:25:42.716041 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:25:42.694869 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:25:42.720949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:25:42.723226 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:25:42.731339 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:25:42.762215 coreos-metadata[822]: Jan 17 00:25:42.762 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 17 00:25:42.763617 coreos-metadata[822]: Jan 17 00:25:42.762 INFO Fetch successful Jan 17 00:25:42.766073 coreos-metadata[822]: Jan 17 00:25:42.764 INFO wrote hostname ci-4081-3-6-n-e100e79615 to /sysroot/etc/hostname Jan 17 00:25:42.765791 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:25:42.778568 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:25:42.783320 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:25:42.788018 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:25:42.791604 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:25:42.897615 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:25:42.904241 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:25:42.907410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:25:42.924199 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:25:42.931609 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:25:42.966006 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
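Before the root switch, flatcar-metadata-hostname fetched the machine's name from the metadata service and wrote it into the not-yet-active root, which is why the destination sits under /sysroot. The same two steps as a sketch (URL and path taken from the log; error handling omitted for brevity):

import urllib.request

with urllib.request.urlopen(
        "http://169.254.169.254/hetzner/v1/metadata/hostname", timeout=10) as resp:
    hostname = resp.read().decode().strip()  # ci-4081-3-6-n-e100e79615 in this boot

# /sysroot is the new root; it becomes / only after the switch-root
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")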
Jan 17 00:25:42.975124 ignition[936]: INFO : Ignition 2.19.0 Jan 17 00:25:42.975124 ignition[936]: INFO : Stage: mount Jan 17 00:25:42.975124 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:42.975124 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:42.975124 ignition[936]: INFO : mount: mount passed Jan 17 00:25:42.975124 ignition[936]: INFO : Ignition finished successfully Jan 17 00:25:42.978870 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:25:42.985341 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:25:43.016374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:25:43.049187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Jan 17 00:25:43.056394 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:25:43.056471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:25:43.061357 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:25:43.076472 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:25:43.076562 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:25:43.081487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:25:43.121802 ignition[964]: INFO : Ignition 2.19.0 Jan 17 00:25:43.123234 ignition[964]: INFO : Stage: files Jan 17 00:25:43.124392 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:43.125250 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:43.126604 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:25:43.128679 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:25:43.128679 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:25:43.134520 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:25:43.135686 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:25:43.135686 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:25:43.135438 unknown[964]: wrote ssh authorized keys file for user: core Jan 17 00:25:43.138631 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:25:43.138631 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:25:43.347553 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:25:43.525289 systemd-networkd[786]: eth1: Gained IPv6LL Jan 17 00:25:43.652873 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:25:43.652873 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:25:43.656485 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:25:44.139298 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:25:44.165538 systemd-networkd[786]: eth0: Gained IPv6LL Jan 17 00:25:44.426763 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:25:44.426763 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(f): [started] setting 
preset to enabled for "prepare-helm.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:25:44.429678 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:25:44.429678 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:25:44.429678 ignition[964]: INFO : files: files passed Jan 17 00:25:44.429678 ignition[964]: INFO : Ignition finished successfully Jan 17 00:25:44.431807 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:25:44.441535 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:25:44.450415 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:25:44.460033 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:25:44.461153 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:25:44.468863 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:25:44.468863 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:25:44.471765 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:25:44.475452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:25:44.477256 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:25:44.482386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:25:44.512397 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:25:44.512513 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:25:44.514599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:25:44.515596 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:25:44.517260 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:25:44.523414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:25:44.540556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:25:44.545480 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:25:44.560435 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:25:44.562370 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:25:44.564233 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:25:44.565960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:25:44.566238 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:25:44.569086 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:25:44.570798 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:25:44.572296 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:25:44.573252 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 17 00:25:44.574310 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:25:44.575492 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:25:44.576570 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:25:44.577681 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:25:44.578769 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:25:44.579850 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:25:44.580936 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:25:44.581136 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:25:44.582654 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:25:44.583891 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:25:44.584871 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:25:44.585017 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:25:44.585938 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:25:44.586129 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:25:44.587479 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:25:44.587700 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:25:44.589040 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:25:44.589298 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:25:44.590455 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:25:44.590594 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:25:44.596243 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:25:44.599243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:25:44.600047 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:25:44.600561 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:25:44.601673 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:25:44.602173 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:25:44.607413 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:25:44.607509 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:25:44.620465 ignition[1018]: INFO : Ignition 2.19.0 Jan 17 00:25:44.620465 ignition[1018]: INFO : Stage: umount Jan 17 00:25:44.621459 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:25:44.621459 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:25:44.622905 ignition[1018]: INFO : umount: umount passed Jan 17 00:25:44.622905 ignition[1018]: INFO : Ignition finished successfully Jan 17 00:25:44.623566 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:25:44.623675 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:25:44.624595 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:25:44.624680 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:25:44.626869 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 17 00:25:44.626924 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:25:44.627319 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:25:44.627356 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:25:44.627695 systemd[1]: Stopped target network.target - Network. Jan 17 00:25:44.628005 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:25:44.628042 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:25:44.628431 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:25:44.628742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:25:44.632393 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:25:44.633122 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:25:44.633797 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:25:44.634512 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:25:44.634566 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:25:44.635612 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:25:44.635653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:25:44.636396 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:25:44.636445 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:25:44.637119 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:25:44.637166 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:25:44.637976 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:25:44.639257 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:25:44.640676 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:25:44.642043 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 17 00:25:44.647972 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:25:44.648173 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:25:44.648203 systemd-networkd[786]: eth1: DHCPv6 lease lost Jan 17 00:25:44.650183 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:25:44.650236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:25:44.652394 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:25:44.652506 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:25:44.653618 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:25:44.653678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:25:44.661309 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:25:44.662031 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:25:44.662121 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:25:44.662578 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:25:44.662628 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:25:44.662986 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:25:44.663033 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 17 00:25:44.663491 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:25:44.672985 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:25:44.674180 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:25:44.674960 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:25:44.675041 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:25:44.683825 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:25:44.683995 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:25:44.684774 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:25:44.684873 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:25:44.685965 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:25:44.686029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:25:44.686625 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:25:44.686663 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:25:44.687346 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:25:44.687389 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:25:44.688408 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:25:44.688448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:25:44.689404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:25:44.689445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:25:44.697297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:25:44.697657 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:25:44.697709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:25:44.698114 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:25:44.698169 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:25:44.698540 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:25:44.698576 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:25:44.698929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:25:44.698962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:25:44.704611 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:25:44.704737 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:25:44.705699 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:25:44.713416 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:25:44.721541 systemd[1]: Switching root. Jan 17 00:25:44.761643 systemd-journald[188]: Journal stopped Jan 17 00:25:46.234450 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:25:46.234563 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:25:46.234592 kernel: SELinux: policy capability open_perms=1 Jan 17 00:25:46.234606 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:25:46.234617 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:25:46.234629 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:25:46.234642 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:25:46.234654 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:25:46.234670 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:25:46.234683 kernel: audit: type=1403 audit(1768609544.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:25:46.234702 systemd[1]: Successfully loaded SELinux policy in 79.741ms. Jan 17 00:25:46.234746 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.848ms. Jan 17 00:25:46.234765 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:25:46.234778 systemd[1]: Detected virtualization kvm. Jan 17 00:25:46.234794 systemd[1]: Detected architecture x86-64. Jan 17 00:25:46.234807 systemd[1]: Detected first boot. Jan 17 00:25:46.234820 systemd[1]: Hostname set to <ci-4081-3-6-n-e100e79615>. Jan 17 00:25:46.234834 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:25:46.234847 zram_generator::config[1064]: No configuration found. Jan 17 00:25:46.234862 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:25:46.234875 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:25:46.234888 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:25:46.234904 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:25:46.234917 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:25:46.234931 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:25:46.234944 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:25:46.234957 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:25:46.234970 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:25:46.234984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:25:46.234997 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:25:46.235014 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:25:46.235027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:25:46.235040 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:25:46.235057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:25:46.235070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:25:46.235084 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
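"Initializing machine ID from VM UUID" above is systemd's first-boot path on KVM: rather than generating a random ID, it derives /etc/machine-id from the product UUID the hypervisor exposes over DMI. A sketch of that derivation; the normalization shown (lowercase, dashes stripped) is my reading of the machine-id format, not systemd's exact code path:

# KVM exposes the VM UUID at this DMI attribute (readable as root)
with open("/sys/class/dmi/id/product_uuid") as f:
    vm_uuid = f.read().strip()

machine_id = vm_uuid.lower().replace("-", "")  # machine-id is 32 lowercase hex digits
print(vm_uuid, "->", machine_id)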
Jan 17 00:25:46.235111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:25:46.235125 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:25:46.235137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:25:46.235171 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:25:46.235185 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:25:46.235197 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:25:46.235214 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:25:46.235234 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:25:46.235248 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:25:46.235270 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:25:46.235291 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:25:46.235307 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:25:46.235321 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:25:46.235334 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:25:46.235347 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:25:46.235361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:25:46.235374 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:25:46.235387 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:25:46.235402 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:25:46.235422 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:25:46.235435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:46.235737 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:25:46.235751 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:25:46.235764 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:25:46.235778 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:25:46.235792 systemd[1]: Reached target machines.target - Containers. Jan 17 00:25:46.235804 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:25:46.235821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:25:46.235834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:25:46.235847 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:25:46.235860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:25:46.235872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:25:46.235886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:25:46.235898 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:25:46.235911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:25:46.235925 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:25:46.235942 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:25:46.235956 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:25:46.235973 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:25:46.235987 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:25:46.236000 kernel: loop: module loaded Jan 17 00:25:46.236014 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:25:46.236028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:25:46.236044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:25:46.236058 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:25:46.236072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:25:46.236085 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:25:46.236119 systemd[1]: Stopped verity-setup.service. Jan 17 00:25:46.236133 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:46.236145 kernel: ACPI: bus type drm_connector registered Jan 17 00:25:46.236171 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:25:46.236183 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:25:46.236200 kernel: fuse: init (API version 7.39) Jan 17 00:25:46.236259 systemd-journald[1147]: Collecting audit messages is disabled. Jan 17 00:25:46.236286 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:25:46.236299 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:25:46.236316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:25:46.236331 systemd-journald[1147]: Journal started Jan 17 00:25:46.236356 systemd-journald[1147]: Runtime Journal (/run/log/journal/0fc40b7da28b4963b337014284a85818) is 8.0M, max 76.3M, 68.3M free. Jan 17 00:25:45.836782 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:25:45.856539 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:25:45.856999 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:25:46.240469 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:25:46.241582 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:25:46.242725 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:25:46.243768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:25:46.244786 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:25:46.245045 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:25:46.246043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:25:46.246680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:25:46.247607 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 17 00:25:46.247780 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:25:46.248745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:25:46.248907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:25:46.250119 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:25:46.250355 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:25:46.251762 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:25:46.252039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:25:46.253042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:25:46.254265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:25:46.255496 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:25:46.273264 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:25:46.283267 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:25:46.294238 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:25:46.295675 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:25:46.295730 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:25:46.297680 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:25:46.305378 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:25:46.310964 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:25:46.312488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:25:46.319453 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:25:46.326398 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:25:46.327240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:25:46.335515 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:25:46.336239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:25:46.340314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:25:46.343960 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:25:46.359539 systemd-journald[1147]: Time spent on flushing to /var/log/journal/0fc40b7da28b4963b337014284a85818 is 98.414ms for 1176 entries. Jan 17 00:25:46.359539 systemd-journald[1147]: System Journal (/var/log/journal/0fc40b7da28b4963b337014284a85818) is 8.0M, max 584.8M, 576.8M free. Jan 17 00:25:46.499060 systemd-journald[1147]: Received client request to flush runtime journal. 
Jan 17 00:25:46.499177 kernel: loop0: detected capacity change from 0 to 8 Jan 17 00:25:46.499206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:25:46.352547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:25:46.358534 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:25:46.361812 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:25:46.362825 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:25:46.405277 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:25:46.406122 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:25:46.416473 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:25:46.438830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:25:46.440964 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:25:46.441642 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:25:46.446466 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:25:46.462753 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:25:46.506174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:25:46.506943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:25:46.513280 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 00:25:46.521300 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 17 00:25:46.521824 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 17 00:25:46.536563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:25:46.546403 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:25:46.577951 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 00:25:46.589889 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:25:46.614390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:25:46.635775 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 17 00:25:46.636189 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 17 00:25:46.645682 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:25:46.649193 kernel: loop3: detected capacity change from 0 to 224512 Jan 17 00:25:46.708119 kernel: loop4: detected capacity change from 0 to 8 Jan 17 00:25:46.717701 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:25:46.743133 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 00:25:46.764137 kernel: loop7: detected capacity change from 0 to 224512 Jan 17 00:25:46.796311 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 17 00:25:46.800954 (sd-merge)[1209]: Merged extensions into '/usr'. Jan 17 00:25:46.811853 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:25:46.812046 systemd[1]: Reloading... 
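The loop0-loop7 capacity changes and the "(sd-merge)" lines are systemd-sysext at work: each of the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner') appears twice in the loop-device list as it is attached, examined, and then merged into an overlay on /usr, after which systemd reloads its unit files. A sketch that only enumerates the candidate images the way sysext would; the search-path list is my assumption of the standard directories, and /etc/extensions/kubernetes.raw is the link the files stage created earlier:

import os

# assumed systemd-sysext search paths, roughly in precedence order
SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SYSEXT_DIRS:
    if not os.path.isdir(d):
        continue
    for entry in sorted(os.listdir(d)):
        path = os.path.join(d, entry)
        if entry.endswith(".raw") or os.path.isdir(path):
            print(path)  # e.g. /etc/extensions/kubernetes.raw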
Jan 17 00:25:46.951134 zram_generator::config[1238]: No configuration found. Jan 17 00:25:47.015239 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:25:47.074685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:25:47.116068 systemd[1]: Reloading finished in 303 ms. Jan 17 00:25:47.143335 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:25:47.148430 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:25:47.159383 systemd[1]: Starting ensure-sysext.service... Jan 17 00:25:47.165879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:25:47.184232 systemd[1]: Reloading requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:25:47.184254 systemd[1]: Reloading... Jan 17 00:25:47.219136 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:25:47.219507 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:25:47.221485 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:25:47.221730 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Jan 17 00:25:47.221804 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Jan 17 00:25:47.227288 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:25:47.227303 systemd-tmpfiles[1279]: Skipping /boot Jan 17 00:25:47.240345 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:25:47.240362 systemd-tmpfiles[1279]: Skipping /boot Jan 17 00:25:47.285175 zram_generator::config[1308]: No configuration found. Jan 17 00:25:47.404607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:25:47.459992 systemd[1]: Reloading finished in 275 ms. Jan 17 00:25:47.481678 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:25:47.483015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:25:47.509543 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:25:47.520605 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:25:47.526372 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:25:47.538589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:25:47.547463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:25:47.557452 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:25:47.574618 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:25:47.578214 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 00:25:47.579411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:25:47.590493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:25:47.600558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:25:47.608520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:25:47.609378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:25:47.610264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:47.617317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:47.617620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:25:47.617848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:25:47.617942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:47.620370 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:25:47.625647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:25:47.625894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:25:47.633922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:25:47.634282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:25:47.650412 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:25:47.657632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:47.657949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:25:47.666772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:25:47.667732 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jan 17 00:25:47.676492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:25:47.681604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:25:47.683408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:25:47.693698 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:25:47.694345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:47.696025 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:25:47.697827 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:25:47.699198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:25:47.700553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
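The recurring "skipped because of an unmet condition check" entries are systemd evaluating Condition*= directives: an unmet condition skips the unit cleanly instead of failing it, which is why proc-xen.mount and xenserver-pv-version.service never run on this KVM guest. A minimal sketch of the mechanism, using a hypothetical unit:

    cat >/etc/systemd/system/xen-only-example.service <<'EOF'
    [Unit]
    Description=Example that is skipped, not failed, on non-Xen hosts
    ConditionVirtualization=xen

    [Service]
    ExecStart=/usr/bin/true
    EOF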
Jan 17 00:25:47.700762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:25:47.716589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:25:47.718031 systemd[1]: Finished ensure-sysext.service. Jan 17 00:25:47.722464 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:25:47.733077 augenrules[1386]: No rules Jan 17 00:25:47.744268 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:25:47.744919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:25:47.751962 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:25:47.753330 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:25:47.755329 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:25:47.756564 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:25:47.756855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:25:47.759009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:25:47.759520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:25:47.768072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:25:47.792027 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:25:47.822566 systemd-resolved[1355]: Positive Trust Anchors: Jan 17 00:25:47.823140 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:25:47.823270 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:25:47.829001 systemd-resolved[1355]: Using system hostname 'ci-4081-3-6-n-e100e79615'. Jan 17 00:25:47.833534 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:25:47.834268 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:25:47.916274 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:25:47.925321 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:25:47.925980 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:25:47.930464 systemd-networkd[1401]: lo: Link UP Jan 17 00:25:47.930482 systemd-networkd[1401]: lo: Gained carrier Jan 17 00:25:47.933564 systemd-timesyncd[1403]: No network connectivity, watching for changes. Jan 17 00:25:47.934250 systemd-networkd[1401]: Enumeration completed Jan 17 00:25:47.934725 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 17 00:25:47.935828 systemd[1]: Reached target network.target - Network. Jan 17 00:25:47.943340 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:25:47.993932 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:47.993953 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:25:47.996617 systemd-networkd[1401]: eth0: Link UP Jan 17 00:25:47.996632 systemd-networkd[1401]: eth0: Gained carrier Jan 17 00:25:47.996656 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:48.040180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 17 00:25:48.055253 systemd-networkd[1401]: eth0: DHCPv4 address 135.181.41.243/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:25:48.057431 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Jan 17 00:25:48.062147 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:25:48.065179 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1418) Jan 17 00:25:48.068347 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:48.068358 systemd-networkd[1401]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:25:48.073814 systemd-networkd[1401]: eth1: Link UP Jan 17 00:25:48.073978 systemd-networkd[1401]: eth1: Gained carrier Jan 17 00:25:48.074078 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:25:48.089966 systemd-timesyncd[1403]: Contacted time server 5.45.97.204:123 (0.flatcar.pool.ntp.org). Jan 17 00:25:48.090257 systemd-timesyncd[1403]: Initial clock synchronization to Sat 2026-01-17 00:25:48.116788 UTC. Jan 17 00:25:48.109220 systemd-networkd[1401]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:25:48.116792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:25:48.125559 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:25:48.142122 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 17 00:25:48.142249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:48.142375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:25:48.149522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:25:48.158509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:25:48.166592 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:25:48.163357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:25:48.165418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
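networkd's note that eth0 and eth1 matched zz-default.network "based on potentially unpredictable interface name" means the catch-all matched on the kernel name alone. A sketch of a more specific .network file that would win over the catch-all by matching on a stable property instead (the MAC address is a placeholder; DHCP mirrors the v4 leases the log shows being acquired):

    cat >/etc/systemd/network/00-uplink.network <<'EOF'
    [Match]
    # A stable property rather than the kernel-assigned name
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4
    EOF
    systemctl restart systemd-networkd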
Jan 17 00:25:48.165465 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:25:48.165481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:25:48.166179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:25:48.179617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:25:48.179856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:25:48.197530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:25:48.197746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:25:48.199555 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:25:48.200151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:25:48.203735 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:25:48.204081 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:25:48.213285 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 17 00:25:48.219182 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:25:48.221846 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:25:48.223215 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 17 00:25:48.230323 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 17 00:25:48.230663 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:25:48.234409 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:25:48.234470 kernel: [drm] features: -context_init Jan 17 00:25:48.234483 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:25:48.238995 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:25:48.239269 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:25:48.239400 kernel: [drm] number of scanouts: 1 Jan 17 00:25:48.240521 kernel: [drm] number of cap sets: 0 Jan 17 00:25:48.250231 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 17 00:25:48.283139 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:25:48.290933 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 00:25:48.299276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:25:48.302961 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:25:48.317729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:25:48.318356 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:25:48.325403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:25:48.329450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:25:48.329681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:25:48.339469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 00:25:48.410540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:25:48.441307 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:25:48.449424 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:25:48.463047 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:25:48.507189 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:25:48.507535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:25:48.507632 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:25:48.507821 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:25:48.507930 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:25:48.508573 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:25:48.509643 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:25:48.509917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:25:48.510204 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:25:48.510289 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:25:48.510431 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:25:48.512963 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:25:48.515728 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:25:48.531718 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:25:48.533332 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:25:48.536227 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:25:48.536908 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:25:48.538996 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:25:48.539986 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:25:48.540013 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:25:48.543272 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:25:48.546298 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:25:48.551296 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:25:48.563242 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:25:48.565855 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:25:48.570201 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:25:48.571671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:25:48.577501 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:25:48.589211 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 17 00:25:48.590390 jq[1476]: false Jan 17 00:25:48.601297 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 17 00:25:48.603652 coreos-metadata[1474]: Jan 17 00:25:48.602 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 17 00:25:48.605299 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:25:48.613340 coreos-metadata[1474]: Jan 17 00:25:48.613 INFO Fetch successful Jan 17 00:25:48.613340 coreos-metadata[1474]: Jan 17 00:25:48.613 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 17 00:25:48.616171 coreos-metadata[1474]: Jan 17 00:25:48.615 INFO Fetch successful Jan 17 00:25:48.616337 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:25:48.622333 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:25:48.623371 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:25:48.623836 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:25:48.632301 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:25:48.640558 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:25:48.645569 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:25:48.660544 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:25:48.661393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:25:48.661745 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:25:48.662259 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:25:48.672589 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:25:48.673189 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:25:48.675694 dbus-daemon[1475]: [system] SELinux support is enabled Jan 17 00:25:48.684380 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
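The coreos-metadata fetches above hit Hetzner's link-local metadata service, which is reachable from any shell on the instance; the two URLs are copied verbatim from the log:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks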
Jan 17 00:25:48.689278 update_engine[1488]: I20260117 00:25:48.686698 1488 main.cc:92] Flatcar Update Engine starting Jan 17 00:25:48.689497 jq[1491]: true Jan 17 00:25:48.701524 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:25:48.705688 jq[1503]: true Jan 17 00:25:48.705936 extend-filesystems[1477]: Found loop4 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found loop5 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found loop6 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found loop7 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda1 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda2 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda3 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found usr Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda4 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda6 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda7 Jan 17 00:25:48.705936 extend-filesystems[1477]: Found sda9 Jan 17 00:25:48.705936 extend-filesystems[1477]: Checking size of /dev/sda9 Jan 17 00:25:48.723589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:25:48.769082 update_engine[1488]: I20260117 00:25:48.707696 1488 update_check_scheduler.cc:74] Next update check in 9m39s Jan 17 00:25:48.723632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:25:48.739960 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:25:48.777310 tar[1499]: linux-amd64/LICENSE Jan 17 00:25:48.777310 tar[1499]: linux-amd64/helm Jan 17 00:25:48.739981 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:25:48.743289 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:25:48.757390 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:25:48.790766 extend-filesystems[1477]: Resized partition /dev/sda9 Jan 17 00:25:48.796001 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:25:48.815130 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 17 00:25:48.863643 systemd-logind[1487]: New seat seat0. Jan 17 00:25:48.881555 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 00:25:48.881577 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:25:48.881817 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:25:48.894494 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:25:48.903715 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:25:48.911211 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1415) Jan 17 00:25:48.940984 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:25:48.943039 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
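update_engine only logs its schedule ("Next update check in 9m39s" above); the current state can be queried interactively with the stock Flatcar client, sketched here:

    update_engine_client -status
    # Expect CURRENT_OP=UPDATE_STATUS_IDLE between polls, matching the
    # UPDATE_STATUS_IDLE that locksmithd reports shortly after this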
Jan 17 00:25:48.974906 systemd[1]: Starting sshkeys.service... Jan 17 00:25:49.023532 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:25:49.038562 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:25:49.063861 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:25:49.068318 containerd[1501]: time="2026-01-17T00:25:49.068087868Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:25:49.070202 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:25:49.089757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:25:49.097882 coreos-metadata[1556]: Jan 17 00:25:49.097 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 00:25:49.100378 coreos-metadata[1556]: Jan 17 00:25:49.099 INFO Fetch successful Jan 17 00:25:49.100467 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:25:49.112566 containerd[1501]: time="2026-01-17T00:25:49.112509909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:25:49.114143 unknown[1556]: wrote ssh authorized keys file for user: core Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.114895471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.114956794Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.114979973Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117069757Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117088719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117228384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117239683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117473536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117515527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117528238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:25:49.118886 containerd[1501]: time="2026-01-17T00:25:49.117538866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:25:49.119163 containerd[1501]: time="2026-01-17T00:25:49.117649743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:25:49.119163 containerd[1501]: time="2026-01-17T00:25:49.117931436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:25:49.119163 containerd[1501]: time="2026-01-17T00:25:49.118070049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:25:49.119163 containerd[1501]: time="2026-01-17T00:25:49.118082600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:25:49.119163 containerd[1501]: time="2026-01-17T00:25:49.118218709Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:25:49.119163 containerd[1501]: time="2026-01-17T00:25:49.118264997Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:25:49.129711 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:25:49.129911 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:25:49.141696 containerd[1501]: time="2026-01-17T00:25:49.141566402Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:25:49.141696 containerd[1501]: time="2026-01-17T00:25:49.141653388Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:25:49.141696 containerd[1501]: time="2026-01-17T00:25:49.141674914Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:25:49.141696 containerd[1501]: time="2026-01-17T00:25:49.141696561Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:25:49.142957 containerd[1501]: time="2026-01-17T00:25:49.142923685Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:25:49.145180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:25:49.149972 containerd[1501]: time="2026-01-17T00:25:49.149895148Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:25:49.150485 containerd[1501]: time="2026-01-17T00:25:49.150468833Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:25:49.153622 containerd[1501]: time="2026-01-17T00:25:49.153603303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:25:49.153912 containerd[1501]: time="2026-01-17T00:25:49.153896066Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 17 00:25:49.153960 containerd[1501]: time="2026-01-17T00:25:49.153950527Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:25:49.154025 containerd[1501]: time="2026-01-17T00:25:49.154015496Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.154191 containerd[1501]: time="2026-01-17T00:25:49.154177619Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.154310 containerd[1501]: time="2026-01-17T00:25:49.154300045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.154445 containerd[1501]: time="2026-01-17T00:25:49.154418906Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154479978Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154496827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154720573Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154741658Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154775155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154811065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154828424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154851313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154862301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154874832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154886562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154900495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154914399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161143 containerd[1501]: time="2026-01-17T00:25:49.154929003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.154939932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.154951751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.154967248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.154986991Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155011262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155027098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155038297Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155127848Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155151407Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155163377Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155258026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155267973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155283119Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:25:49.161532 containerd[1501]: time="2026-01-17T00:25:49.155292775Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:25:49.163599 containerd[1501]: time="2026-01-17T00:25:49.155302681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.155626877Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.155717198Z" level=info msg="Connect containerd service" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.155745085Z" level=info msg="using legacy CRI server" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.155751466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.155885221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.160763414Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:25:49.174366 
containerd[1501]: time="2026-01-17T00:25:49.162911106Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.162970576Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.163012857Z" level=info msg="Start subscribing containerd event" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.163058273Z" level=info msg="Start recovering state" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.164667259Z" level=info msg="Start event monitor" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.164697991Z" level=info msg="Start snapshots syncer" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.164712245Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.164729143Z" level=info msg="Start streaming server" Jan 17 00:25:49.174366 containerd[1501]: time="2026-01-17T00:25:49.169827777Z" level=info msg="containerd successfully booted in 0.106250s" Jan 17 00:25:49.165121 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:25:49.171866 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:25:49.182549 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:25:49.191980 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 17 00:25:49.196849 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:25:49.240316 update-ssh-keys[1577]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:25:49.198960 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:25:49.202036 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:25:49.204834 systemd[1]: Finished sshkeys.service. Jan 17 00:25:49.243154 extend-filesystems[1526]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:25:49.243154 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 17 00:25:49.243154 extend-filesystems[1526]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 17 00:25:49.249345 extend-filesystems[1477]: Resized filesystem in /dev/sda9 Jan 17 00:25:49.249345 extend-filesystems[1477]: Found sr0 Jan 17 00:25:49.248886 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:25:49.250010 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:25:49.286414 systemd-networkd[1401]: eth0: Gained IPv6LL Jan 17 00:25:49.290685 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:25:49.296678 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:25:49.308206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:49.318991 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:25:49.349689 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:25:49.413543 systemd-networkd[1401]: eth1: Gained IPv6LL Jan 17 00:25:49.617189 tar[1499]: linux-amd64/README.md Jan 17 00:25:49.630199 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:25:50.137288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:50.138473 systemd[1]: Reached target multi-user.target - Multi-User System. 
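The long CRI config dump earlier in the containerd startup shows Options:map[SystemdCgroup:true] for the runc runtime; in containerd 1.7's config.toml that corresponds to the fragment below, and the effective merged value can be confirmed with containerd's own CLI (a sketch; the TOML section path is the standard CRI plugin layout):

    containerd config dump | grep -A 2 'runtimes.runc.options'

    # Equivalent fragment in /etc/containerd/config.toml:
    # [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #   SystemdCgroup = true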
Jan 17 00:25:50.140303 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:25:50.143772 systemd[1]: Startup finished in 1.437s (kernel) + 6.223s (initrd) + 5.245s (userspace) = 12.906s. Jan 17 00:25:50.713812 kubelet[1607]: E0117 00:25:50.713731 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:25:50.719551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:25:50.719734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:25:53.554154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:25:53.559579 systemd[1]: Started sshd@0-135.181.41.243:22-20.161.92.111:34958.service - OpenSSH per-connection server daemon (20.161.92.111:34958). Jan 17 00:25:54.341765 sshd[1619]: Accepted publickey for core from 20.161.92.111 port 34958 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:25:54.344193 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:54.353878 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:25:54.360992 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:25:54.365017 systemd-logind[1487]: New session 1 of user core. Jan 17 00:25:54.379124 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:25:54.386821 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:25:54.398745 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:25:54.532485 systemd[1623]: Queued start job for default target default.target. Jan 17 00:25:54.542116 systemd[1623]: Created slice app.slice - User Application Slice. Jan 17 00:25:54.542164 systemd[1623]: Reached target paths.target - Paths. Jan 17 00:25:54.542182 systemd[1623]: Reached target timers.target - Timers. Jan 17 00:25:54.544434 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:25:54.572239 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:25:54.572466 systemd[1623]: Reached target sockets.target - Sockets. Jan 17 00:25:54.572486 systemd[1623]: Reached target basic.target - Basic System. Jan 17 00:25:54.572536 systemd[1623]: Reached target default.target - Main User Target. Jan 17 00:25:54.572582 systemd[1623]: Startup finished in 163ms. Jan 17 00:25:54.572833 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:25:54.580497 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:25:55.133641 systemd[1]: Started sshd@1-135.181.41.243:22-20.161.92.111:34964.service - OpenSSH per-connection server daemon (20.161.92.111:34964). Jan 17 00:25:55.908501 sshd[1634]: Accepted publickey for core from 20.161.92.111 port 34964 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:25:55.911283 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:55.919207 systemd-logind[1487]: New session 2 of user core. 
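The kubelet exit above is the missing /var/lib/kubelet/config.yaml: the unit fails and systemd schedules the restarts seen later in the log. The file it wants is a KubeletConfiguration document, normally written by provisioning (for example kubeadm) rather than by hand; a minimal sketch of the format, with placeholder values:

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Matches the SystemdCgroup=true runc option in the containerd config above
    cgroupDriver: systemd
    EOF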
Jan 17 00:25:55.931474 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:25:56.442968 sshd[1634]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:56.447598 systemd[1]: sshd@1-135.181.41.243:22-20.161.92.111:34964.service: Deactivated successfully. Jan 17 00:25:56.449826 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:25:56.451774 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:25:56.453084 systemd-logind[1487]: Removed session 2. Jan 17 00:25:56.578520 systemd[1]: Started sshd@2-135.181.41.243:22-20.161.92.111:34980.service - OpenSSH per-connection server daemon (20.161.92.111:34980). Jan 17 00:25:57.336643 sshd[1641]: Accepted publickey for core from 20.161.92.111 port 34980 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:25:57.339308 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:57.346938 systemd-logind[1487]: New session 3 of user core. Jan 17 00:25:57.354317 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:25:57.862687 sshd[1641]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:57.869465 systemd[1]: sshd@2-135.181.41.243:22-20.161.92.111:34980.service: Deactivated successfully. Jan 17 00:25:57.873756 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:25:57.874884 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:25:57.876937 systemd-logind[1487]: Removed session 3. Jan 17 00:25:58.003691 systemd[1]: Started sshd@3-135.181.41.243:22-20.161.92.111:34988.service - OpenSSH per-connection server daemon (20.161.92.111:34988). Jan 17 00:25:58.758258 sshd[1648]: Accepted publickey for core from 20.161.92.111 port 34988 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:25:58.760305 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:58.765507 systemd-logind[1487]: New session 4 of user core. Jan 17 00:25:58.771282 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:25:59.292272 sshd[1648]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:59.297114 systemd[1]: sshd@3-135.181.41.243:22-20.161.92.111:34988.service: Deactivated successfully. Jan 17 00:25:59.300419 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:25:59.301338 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:25:59.302688 systemd-logind[1487]: Removed session 4. Jan 17 00:25:59.435585 systemd[1]: Started sshd@4-135.181.41.243:22-20.161.92.111:34996.service - OpenSSH per-connection server daemon (20.161.92.111:34996). Jan 17 00:26:00.195922 sshd[1655]: Accepted publickey for core from 20.161.92.111 port 34996 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:26:00.198514 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:00.206809 systemd-logind[1487]: New session 5 of user core. Jan 17 00:26:00.213361 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 00:26:00.626973 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:26:00.627736 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:26:00.648765 sudo[1658]: pam_unix(sudo:session): session closed for user root Jan 17 00:26:00.773209 sshd[1655]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:00.783165 systemd[1]: sshd@4-135.181.41.243:22-20.161.92.111:34996.service: Deactivated successfully. Jan 17 00:26:00.788783 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:26:00.790712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:26:00.792657 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:26:00.801445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:00.803776 systemd-logind[1487]: Removed session 5. Jan 17 00:26:00.919287 systemd[1]: Started sshd@5-135.181.41.243:22-20.161.92.111:35000.service - OpenSSH per-connection server daemon (20.161.92.111:35000). Jan 17 00:26:00.951482 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:26:00.953240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:00.997771 kubelet[1672]: E0117 00:26:00.997662 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:26:01.008246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:26:01.008451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:26:01.699559 sshd[1666]: Accepted publickey for core from 20.161.92.111 port 35000 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:26:01.701785 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:01.708167 systemd-logind[1487]: New session 6 of user core. Jan 17 00:26:01.717412 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:26:02.112237 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:26:02.112676 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:26:02.118181 sudo[1682]: pam_unix(sudo:session): session closed for user root Jan 17 00:26:02.127522 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:26:02.127969 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:26:02.153563 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:26:02.164845 auditctl[1685]: No rules Jan 17 00:26:02.166599 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:26:02.166924 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:26:02.173920 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:26:02.221584 augenrules[1703]: No rules Jan 17 00:26:02.223260 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 00:26:02.227389 sudo[1681]: pam_unix(sudo:session): session closed for user root Jan 17 00:26:02.350343 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:02.355950 systemd[1]: sshd@5-135.181.41.243:22-20.161.92.111:35000.service: Deactivated successfully. Jan 17 00:26:02.358883 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:26:02.361495 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:26:02.362890 systemd-logind[1487]: Removed session 6. Jan 17 00:26:02.486530 systemd[1]: Started sshd@6-135.181.41.243:22-20.161.92.111:37524.service - OpenSSH per-connection server daemon (20.161.92.111:37524). Jan 17 00:26:03.236003 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 37524 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:26:03.238486 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:03.245194 systemd-logind[1487]: New session 7 of user core. Jan 17 00:26:03.255472 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:26:03.647093 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:26:03.647719 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:26:03.956545 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:26:03.969719 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:26:04.413756 dockerd[1729]: time="2026-01-17T00:26:04.413329428Z" level=info msg="Starting up" Jan 17 00:26:04.490016 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport407105082-merged.mount: Deactivated successfully. Jan 17 00:26:04.533885 dockerd[1729]: time="2026-01-17T00:26:04.533804618Z" level=info msg="Loading containers: start." Jan 17 00:26:04.662135 kernel: Initializing XFRM netlink socket Jan 17 00:26:04.751562 systemd-networkd[1401]: docker0: Link UP Jan 17 00:26:04.768012 dockerd[1729]: time="2026-01-17T00:26:04.767925359Z" level=info msg="Loading containers: done." Jan 17 00:26:04.785417 dockerd[1729]: time="2026-01-17T00:26:04.785350665Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:26:04.785610 dockerd[1729]: time="2026-01-17T00:26:04.785531270Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:26:04.785706 dockerd[1729]: time="2026-01-17T00:26:04.785683348Z" level=info msg="Daemon has completed initialization" Jan 17 00:26:04.818329 dockerd[1729]: time="2026-01-17T00:26:04.818073885Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:26:04.818409 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:26:05.489726 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2637350802-merged.mount: Deactivated successfully. Jan 17 00:26:05.947457 containerd[1501]: time="2026-01-17T00:26:05.947312346Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:26:06.595844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467025671.mount: Deactivated successfully. 
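Docker's overlay2 warning during startup ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") keys off kernel build options. A sketch of checking both sides on a host like this one (reading /proc/config.gz assumes CONFIG_IKCONFIG_PROC is enabled, which may not hold on every kernel):

    zcat /proc/config.gz | grep -E 'CONFIG_OVERLAY_FS_(REDIRECT_DIR|METACOPY)='
    docker info --format '{{.Driver}}'   # expects overlay2, as in the log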
Jan 17 00:26:07.981175 containerd[1501]: time="2026-01-17T00:26:07.981112842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:07.982193 containerd[1501]: time="2026-01-17T00:26:07.982124525Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070747" Jan 17 00:26:07.984247 containerd[1501]: time="2026-01-17T00:26:07.982983979Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:07.986059 containerd[1501]: time="2026-01-17T00:26:07.985023803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:07.986059 containerd[1501]: time="2026-01-17T00:26:07.985862116Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.038495187s" Jan 17 00:26:07.986059 containerd[1501]: time="2026-01-17T00:26:07.985889830Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:26:07.986819 containerd[1501]: time="2026-01-17T00:26:07.986794387Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:26:09.332833 containerd[1501]: time="2026-01-17T00:26:09.332765166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:09.333877 containerd[1501]: time="2026-01-17T00:26:09.333832611Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993376" Jan 17 00:26:09.335621 containerd[1501]: time="2026-01-17T00:26:09.335249517Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:09.338680 containerd[1501]: time="2026-01-17T00:26:09.338642110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:09.339694 containerd[1501]: time="2026-01-17T00:26:09.339666244Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.352783411s" Jan 17 00:26:09.339766 containerd[1501]: time="2026-01-17T00:26:09.339703031Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 
00:26:09.340228 containerd[1501]: time="2026-01-17T00:26:09.340212268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:26:10.445858 containerd[1501]: time="2026-01-17T00:26:10.445783875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:10.446991 containerd[1501]: time="2026-01-17T00:26:10.446835301Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405098" Jan 17 00:26:10.447927 containerd[1501]: time="2026-01-17T00:26:10.447587569Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:10.449860 containerd[1501]: time="2026-01-17T00:26:10.449829392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:10.450856 containerd[1501]: time="2026-01-17T00:26:10.450744500Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.110508741s" Jan 17 00:26:10.450856 containerd[1501]: time="2026-01-17T00:26:10.450770461Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:26:10.451905 containerd[1501]: time="2026-01-17T00:26:10.451517816Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:26:11.220061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:26:11.229331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:11.436232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:11.441008 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:26:11.474192 kubelet[1946]: E0117 00:26:11.472991 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:26:11.476854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:26:11.477040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:26:11.582360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656135693.mount: Deactivated successfully. 
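The PullImage / ImageCreate pairs are containerd servicing CRI pull requests. The same pulls can be reproduced by hand over the CRI socket, assuming crictl is installed and pointed at containerd:

    # Manual equivalent of the logged control-plane image pulls (illustrative).
    crictl pull registry.k8s.io/kube-scheduler:v1.32.11
    crictl images --digests | grep kube-scheduler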
Jan 17 00:26:11.844006 containerd[1501]: time="2026-01-17T00:26:11.843668603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:11.845238 containerd[1501]: time="2026-01-17T00:26:11.845132629Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161927" Jan 17 00:26:11.846172 containerd[1501]: time="2026-01-17T00:26:11.846036608Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:11.848915 containerd[1501]: time="2026-01-17T00:26:11.848885739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:11.849555 containerd[1501]: time="2026-01-17T00:26:11.849526629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.397985374s" Jan 17 00:26:11.849603 containerd[1501]: time="2026-01-17T00:26:11.849555991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:26:11.850650 containerd[1501]: time="2026-01-17T00:26:11.850267692Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:26:12.384023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452487829.mount: Deactivated successfully. 
Jan 17 00:26:13.338520 containerd[1501]: time="2026-01-17T00:26:13.338462871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:13.339730 containerd[1501]: time="2026-01-17T00:26:13.339588624Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Jan 17 00:26:13.340908 containerd[1501]: time="2026-01-17T00:26:13.340634168Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:13.343789 containerd[1501]: time="2026-01-17T00:26:13.342973076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:13.343789 containerd[1501]: time="2026-01-17T00:26:13.343666954Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.493375123s" Jan 17 00:26:13.343789 containerd[1501]: time="2026-01-17T00:26:13.343692583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:26:13.344575 containerd[1501]: time="2026-01-17T00:26:13.344557063Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:26:13.807465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108750808.mount: Deactivated successfully. 
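pause:3.10 is the sandbox (infra) image: one pause process per pod keeps the pod's namespaces alive across container restarts. Which tag containerd uses is set in its CRI plugin config; the path below is the conventional location and an assumption here, since Flatcar may ship the file elsewhere:

    # The sandbox image is declared in containerd's CRI plugin section.
    grep -n sandbox_image /etc/containerd/config.toml
    # e.g.  sandbox_image = "registry.k8s.io/pause:3.10"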
Jan 17 00:26:13.816356 containerd[1501]: time="2026-01-17T00:26:13.816238037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:13.818748 containerd[1501]: time="2026-01-17T00:26:13.818474308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 17 00:26:13.820256 containerd[1501]: time="2026-01-17T00:26:13.820150278Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:13.824661 containerd[1501]: time="2026-01-17T00:26:13.824596950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:13.827062 containerd[1501]: time="2026-01-17T00:26:13.826780032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 482.188826ms" Jan 17 00:26:13.827062 containerd[1501]: time="2026-01-17T00:26:13.826868733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:26:13.828512 containerd[1501]: time="2026-01-17T00:26:13.828434345Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:26:14.378203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661999695.mount: Deactivated successfully. Jan 17 00:26:16.281347 containerd[1501]: time="2026-01-17T00:26:16.281269566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:16.282410 containerd[1501]: time="2026-01-17T00:26:16.282345965Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Jan 17 00:26:16.283734 containerd[1501]: time="2026-01-17T00:26:16.283436276Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:16.286553 containerd[1501]: time="2026-01-17T00:26:16.286527119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:16.287389 containerd[1501]: time="2026-01-17T00:26:16.287359544Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.458876372s" Jan 17 00:26:16.287437 containerd[1501]: time="2026-01-17T00:26:16.287393984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:26:18.612704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
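The etcd pull carries enough figures for a rough registry-throughput estimate: 57,682,132 bytes read in 2.458876372s is about 22 MiB/s:

    # Back-of-envelope pull bandwidth from the logged byte count and duration.
    awk 'BEGIN { printf "%.1f MiB/s\n", 57682132 / 2.458876372 / (1024*1024) }'
    # -> 22.4 MiB/s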
Jan 17 00:26:18.623634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:18.686848 systemd[1]: Reloading requested from client PID 2094 ('systemctl') (unit session-7.scope)... Jan 17 00:26:18.686885 systemd[1]: Reloading... Jan 17 00:26:18.823196 zram_generator::config[2140]: No configuration found. Jan 17 00:26:18.908178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:26:18.971511 systemd[1]: Reloading finished in 282 ms. Jan 17 00:26:19.027183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:19.033269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:19.034520 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:26:19.034967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:19.041783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:19.191586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:19.202945 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:26:19.247138 kubelet[2190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:26:19.247138 kubelet[2190]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:26:19.247138 kubelet[2190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
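The three deprecation warnings come from flags kubeadm still passes on the kubelet command line instead of through the config file. On a stock kubeadm layout, which is an assumption about this node, they originate in two files:

    # Usual sources of --container-runtime-endpoint and friends (paths assumed).
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    cat /var/lib/kubelet/kubeadm-flags.env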
Jan 17 00:26:19.247138 kubelet[2190]: I0117 00:26:19.246571 2190 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:26:19.516848 kubelet[2190]: I0117 00:26:19.516792 2190 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:26:19.516848 kubelet[2190]: I0117 00:26:19.516835 2190 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:26:19.517078 kubelet[2190]: I0117 00:26:19.517060 2190 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:26:19.541945 kubelet[2190]: I0117 00:26:19.541912 2190 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:26:19.542391 kubelet[2190]: E0117 00:26:19.542364 2190 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://135.181.41.243:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:19.550494 kubelet[2190]: E0117 00:26:19.550461 2190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:26:19.550949 kubelet[2190]: I0117 00:26:19.550616 2190 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:26:19.554517 kubelet[2190]: I0117 00:26:19.554469 2190 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:26:19.555766 kubelet[2190]: I0117 00:26:19.555720 2190 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:26:19.555905 kubelet[2190]: I0117 00:26:19.555760 2190 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-e100e79615","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:26:19.555975 kubelet[2190]: I0117 00:26:19.555909 2190 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:26:19.555975 kubelet[2190]: I0117 00:26:19.555918 2190 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:26:19.556070 kubelet[2190]: I0117 00:26:19.556053 2190 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:26:19.559526 kubelet[2190]: I0117 00:26:19.559371 2190 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:26:19.559526 kubelet[2190]: I0117 00:26:19.559409 2190 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:26:19.559526 kubelet[2190]: I0117 00:26:19.559427 2190 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:26:19.559526 kubelet[2190]: I0117 00:26:19.559438 2190 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:26:19.564601 kubelet[2190]: I0117 00:26:19.564555 2190 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:26:19.565583 kubelet[2190]: I0117 00:26:19.565355 2190 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:26:19.567139 kubelet[2190]: W0117 00:26:19.566243 2190 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 00:26:19.567284 kubelet[2190]: W0117 00:26:19.567238 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.41.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:19.567330 kubelet[2190]: E0117 00:26:19.567294 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://135.181.41.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:19.567388 kubelet[2190]: W0117 00:26:19.567352 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.41.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e100e79615&limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:19.567388 kubelet[2190]: E0117 00:26:19.567371 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://135.181.41.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e100e79615&limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:19.569329 kubelet[2190]: I0117 00:26:19.569286 2190 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:26:19.569460 kubelet[2190]: I0117 00:26:19.569352 2190 server.go:1287] "Started kubelet" Jan 17 00:26:19.569692 kubelet[2190]: I0117 00:26:19.569651 2190 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:26:19.571285 kubelet[2190]: I0117 00:26:19.570862 2190 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:26:19.573136 kubelet[2190]: I0117 00:26:19.572953 2190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:26:19.573358 kubelet[2190]: I0117 00:26:19.573347 2190 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:26:19.574309 kubelet[2190]: I0117 00:26:19.574199 2190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:26:19.576653 kubelet[2190]: E0117 00:26:19.575735 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.41.243:6443/api/v1/namespaces/default/events\": dial tcp 135.181.41.243:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-e100e79615.188b5d0ac894fc2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-e100e79615,UID:ci-4081-3-6-n-e100e79615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-e100e79615,},FirstTimestamp:2026-01-17 00:26:19.56931486 +0000 UTC m=+0.360957330,LastTimestamp:2026-01-17 00:26:19.56931486 +0000 UTC m=+0.360957330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-e100e79615,}" Jan 17 00:26:19.578437 kubelet[2190]: I0117 00:26:19.576854 2190 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:26:19.581443 kubelet[2190]: E0117 00:26:19.581427 2190 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:26:19.581589 kubelet[2190]: E0117 00:26:19.581580 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:19.581673 kubelet[2190]: I0117 00:26:19.581666 2190 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:26:19.581877 kubelet[2190]: I0117 00:26:19.581866 2190 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:26:19.581944 kubelet[2190]: I0117 00:26:19.581938 2190 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:26:19.582709 kubelet[2190]: I0117 00:26:19.582691 2190 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:26:19.583219 kubelet[2190]: W0117 00:26:19.583190 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.41.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:19.583299 kubelet[2190]: E0117 00:26:19.583286 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://135.181.41.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:19.584304 kubelet[2190]: E0117 00:26:19.584284 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.41.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e100e79615?timeout=10s\": dial tcp 135.181.41.243:6443: connect: connection refused" interval="200ms" Jan 17 00:26:19.584448 kubelet[2190]: I0117 00:26:19.584439 2190 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:26:19.584494 kubelet[2190]: I0117 00:26:19.584487 2190 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:26:19.597927 kubelet[2190]: I0117 00:26:19.597882 2190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:26:19.599498 kubelet[2190]: I0117 00:26:19.599479 2190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:26:19.599605 kubelet[2190]: I0117 00:26:19.599598 2190 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:26:19.599657 kubelet[2190]: I0117 00:26:19.599650 2190 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:26:19.599684 kubelet[2190]: I0117 00:26:19.599679 2190 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:26:19.599761 kubelet[2190]: E0117 00:26:19.599749 2190 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:26:19.611490 kubelet[2190]: W0117 00:26:19.611428 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.41.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:19.612487 kubelet[2190]: E0117 00:26:19.612461 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://135.181.41.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:19.621190 kubelet[2190]: I0117 00:26:19.621150 2190 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:26:19.621373 kubelet[2190]: I0117 00:26:19.621362 2190 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:26:19.621455 kubelet[2190]: I0117 00:26:19.621448 2190 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:26:19.623068 kubelet[2190]: I0117 00:26:19.623042 2190 policy_none.go:49] "None policy: Start" Jan 17 00:26:19.623181 kubelet[2190]: I0117 00:26:19.623173 2190 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:26:19.623225 kubelet[2190]: I0117 00:26:19.623219 2190 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:26:19.629015 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:26:19.645384 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:26:19.649233 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:26:19.662376 kubelet[2190]: I0117 00:26:19.662352 2190 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:26:19.662931 kubelet[2190]: I0117 00:26:19.662779 2190 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:26:19.662931 kubelet[2190]: I0117 00:26:19.662794 2190 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:26:19.663655 kubelet[2190]: I0117 00:26:19.663310 2190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:26:19.665360 kubelet[2190]: E0117 00:26:19.665093 2190 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:26:19.665360 kubelet[2190]: E0117 00:26:19.665179 2190 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:19.719713 systemd[1]: Created slice kubepods-burstable-pod2f548a38d62179f505f11c94d3b29a60.slice - libcontainer container kubepods-burstable-pod2f548a38d62179f505f11c94d3b29a60.slice. 
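kubepods.slice and its burstable/besteffort children map pod QoS classes onto the systemd cgroup tree, and the per-pod slices created next hang beneath them. The hierarchy can be walked with ordinary systemd tooling, shown here as a sketch:

    # Inspect the QoS slice tree the kubelet just created.
    systemctl status kubepods.slice --no-pager
    systemd-cgls kubepods.slice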
Jan 17 00:26:19.744489 kubelet[2190]: E0117 00:26:19.744418 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.753739 systemd[1]: Created slice kubepods-burstable-pod1534507e84119c3a1b0c39bcd2c565c2.slice - libcontainer container kubepods-burstable-pod1534507e84119c3a1b0c39bcd2c565c2.slice. Jan 17 00:26:19.760210 kubelet[2190]: E0117 00:26:19.759790 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.762170 systemd[1]: Created slice kubepods-burstable-pod8567510f9c9de485e02f8ce983200c61.slice - libcontainer container kubepods-burstable-pod8567510f9c9de485e02f8ce983200c61.slice. Jan 17 00:26:19.764824 kubelet[2190]: E0117 00:26:19.764739 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.767634 kubelet[2190]: I0117 00:26:19.767293 2190 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.769615 kubelet[2190]: E0117 00:26:19.769546 2190 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.41.243:6443/api/v1/nodes\": dial tcp 135.181.41.243:6443: connect: connection refused" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783403 kubelet[2190]: I0117 00:26:19.782991 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783403 kubelet[2190]: I0117 00:26:19.783053 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783403 kubelet[2190]: I0117 00:26:19.783086 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1534507e84119c3a1b0c39bcd2c565c2-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-e100e79615\" (UID: \"1534507e84119c3a1b0c39bcd2c565c2\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783403 kubelet[2190]: I0117 00:26:19.783142 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f548a38d62179f505f11c94d3b29a60-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e100e79615\" (UID: \"2f548a38d62179f505f11c94d3b29a60\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783403 kubelet[2190]: I0117 00:26:19.783202 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f548a38d62179f505f11c94d3b29a60-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-6-n-e100e79615\" (UID: \"2f548a38d62179f505f11c94d3b29a60\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783783 kubelet[2190]: I0117 00:26:19.783245 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f548a38d62179f505f11c94d3b29a60-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-e100e79615\" (UID: \"2f548a38d62179f505f11c94d3b29a60\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783783 kubelet[2190]: I0117 00:26:19.783270 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783783 kubelet[2190]: I0117 00:26:19.783296 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.783783 kubelet[2190]: I0117 00:26:19.783333 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.785467 kubelet[2190]: E0117 00:26:19.785375 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.41.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e100e79615?timeout=10s\": dial tcp 135.181.41.243:6443: connect: connection refused" interval="400ms" Jan 17 00:26:19.972794 kubelet[2190]: I0117 00:26:19.972391 2190 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:19.972794 kubelet[2190]: E0117 00:26:19.972752 2190 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.41.243:6443/api/v1/nodes\": dial tcp 135.181.41.243:6443: connect: connection refused" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:20.047504 containerd[1501]: time="2026-01-17T00:26:20.047274127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-e100e79615,Uid:2f548a38d62179f505f11c94d3b29a60,Namespace:kube-system,Attempt:0,}" Jan 17 00:26:20.066618 containerd[1501]: time="2026-01-17T00:26:20.066442568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-e100e79615,Uid:1534507e84119c3a1b0c39bcd2c565c2,Namespace:kube-system,Attempt:0,}" Jan 17 00:26:20.066618 containerd[1501]: time="2026-01-17T00:26:20.066449820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-e100e79615,Uid:8567510f9c9de485e02f8ce983200c61,Namespace:kube-system,Attempt:0,}" Jan 17 00:26:20.187435 kubelet[2190]: E0117 00:26:20.186904 2190 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://135.181.41.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e100e79615?timeout=10s\": dial tcp 135.181.41.243:6443: connect: connection refused" interval="800ms" Jan 17 00:26:20.376166 kubelet[2190]: I0117 00:26:20.375921 2190 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:20.376959 kubelet[2190]: E0117 00:26:20.376587 2190 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.41.243:6443/api/v1/nodes\": dial tcp 135.181.41.243:6443: connect: connection refused" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:20.418916 kubelet[2190]: W0117 00:26:20.418845 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.41.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:20.418916 kubelet[2190]: E0117 00:26:20.418918 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://135.181.41.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:20.573066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144760868.mount: Deactivated successfully. Jan 17 00:26:20.588341 kubelet[2190]: W0117 00:26:20.588245 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.41.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:20.588341 kubelet[2190]: E0117 00:26:20.588342 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://135.181.41.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:20.590580 containerd[1501]: time="2026-01-17T00:26:20.589077732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:26:20.592327 containerd[1501]: time="2026-01-17T00:26:20.592198023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:26:20.593380 containerd[1501]: time="2026-01-17T00:26:20.593308467Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:26:20.594617 containerd[1501]: time="2026-01-17T00:26:20.594553171Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:26:20.597579 containerd[1501]: time="2026-01-17T00:26:20.597507975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:26:20.597579 containerd[1501]: 
time="2026-01-17T00:26:20.597575961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jan 17 00:26:20.598471 containerd[1501]: time="2026-01-17T00:26:20.598394597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:26:20.602947 containerd[1501]: time="2026-01-17T00:26:20.602878860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:26:20.607193 containerd[1501]: time="2026-01-17T00:26:20.605467491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.054832ms" Jan 17 00:26:20.608361 containerd[1501]: time="2026-01-17T00:26:20.608272370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.711255ms" Jan 17 00:26:20.609935 containerd[1501]: time="2026-01-17T00:26:20.609844339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.042458ms" Jan 17 00:26:20.752950 kubelet[2190]: W0117 00:26:20.752743 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.41.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:20.752950 kubelet[2190]: E0117 00:26:20.752865 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://135.181.41.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:20.794734 kubelet[2190]: W0117 00:26:20.794398 2190 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.41.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e100e79615&limit=500&resourceVersion=0": dial tcp 135.181.41.243:6443: connect: connection refused Jan 17 00:26:20.794734 kubelet[2190]: E0117 00:26:20.794527 2190 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://135.181.41.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e100e79615&limit=500&resourceVersion=0\": dial tcp 135.181.41.243:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:26:20.820545 containerd[1501]: time="2026-01-17T00:26:20.820390968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:20.820755 containerd[1501]: time="2026-01-17T00:26:20.820680334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:20.820902 containerd[1501]: time="2026-01-17T00:26:20.820860366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:20.821707 containerd[1501]: time="2026-01-17T00:26:20.821138149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:20.821707 containerd[1501]: time="2026-01-17T00:26:20.821255436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:20.821707 containerd[1501]: time="2026-01-17T00:26:20.821290694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:20.821707 containerd[1501]: time="2026-01-17T00:26:20.821440198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:20.825342 containerd[1501]: time="2026-01-17T00:26:20.825228352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:20.825641 containerd[1501]: time="2026-01-17T00:26:20.825589174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:20.826887 containerd[1501]: time="2026-01-17T00:26:20.826702688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:20.829506 containerd[1501]: time="2026-01-17T00:26:20.829451485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:20.832412 containerd[1501]: time="2026-01-17T00:26:20.832220397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:20.853283 systemd[1]: Started cri-containerd-87c4e6d26ff15ad076eb72bf4a55c2be6b0590a8974d825ce9bb622804922749.scope - libcontainer container 87c4e6d26ff15ad076eb72bf4a55c2be6b0590a8974d825ce9bb622804922749. Jan 17 00:26:20.856886 systemd[1]: Started cri-containerd-7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052.scope - libcontainer container 7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052. Jan 17 00:26:20.864651 systemd[1]: Started cri-containerd-4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e.scope - libcontainer container 4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e. 
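Each "Started cri-containerd-<id>.scope" line is a pod sandbox coming up as a transient scope under the systemd cgroup driver. Once the RunPodSandbox calls below return, the sandboxes are listable over the same CRI socket (illustrative, assuming crictl):

    # The three control-plane sandboxes and their containers.
    crictl pods
    crictl ps -a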
Jan 17 00:26:20.904299 containerd[1501]: time="2026-01-17T00:26:20.904177842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-e100e79615,Uid:1534507e84119c3a1b0c39bcd2c565c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052\"" Jan 17 00:26:20.909540 containerd[1501]: time="2026-01-17T00:26:20.909303811Z" level=info msg="CreateContainer within sandbox \"7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:26:20.931617 containerd[1501]: time="2026-01-17T00:26:20.931573542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-e100e79615,Uid:2f548a38d62179f505f11c94d3b29a60,Namespace:kube-system,Attempt:0,} returns sandbox id \"87c4e6d26ff15ad076eb72bf4a55c2be6b0590a8974d825ce9bb622804922749\"" Jan 17 00:26:20.936228 containerd[1501]: time="2026-01-17T00:26:20.935998780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-e100e79615,Uid:8567510f9c9de485e02f8ce983200c61,Namespace:kube-system,Attempt:0,} returns sandbox id \"4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e\"" Jan 17 00:26:20.938862 containerd[1501]: time="2026-01-17T00:26:20.938631582Z" level=info msg="CreateContainer within sandbox \"4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:26:20.940001 containerd[1501]: time="2026-01-17T00:26:20.939962185Z" level=info msg="CreateContainer within sandbox \"87c4e6d26ff15ad076eb72bf4a55c2be6b0590a8974d825ce9bb622804922749\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:26:20.952035 containerd[1501]: time="2026-01-17T00:26:20.951968564Z" level=info msg="CreateContainer within sandbox \"7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1\"" Jan 17 00:26:20.953875 containerd[1501]: time="2026-01-17T00:26:20.952737069Z" level=info msg="StartContainer for \"8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1\"" Jan 17 00:26:20.954873 containerd[1501]: time="2026-01-17T00:26:20.954805421Z" level=info msg="CreateContainer within sandbox \"4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3\"" Jan 17 00:26:20.956199 containerd[1501]: time="2026-01-17T00:26:20.955591391Z" level=info msg="StartContainer for \"6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3\"" Jan 17 00:26:20.980236 systemd[1]: Started cri-containerd-8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1.scope - libcontainer container 8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1. Jan 17 00:26:20.983474 systemd[1]: Started cri-containerd-6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3.scope - libcontainer container 6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3. 
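Because the cgroup driver is systemd, every container ID doubles as a scope unit name, so standard systemd tooling applies directly; for the kube-scheduler container started above (ID copied from the log):

    # Container <-> scope mapping under the systemd cgroup driver.
    systemctl status --no-pager \
      cri-containerd-8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1.scope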
Jan 17 00:26:20.987571 kubelet[2190]: E0117 00:26:20.987513 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.41.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e100e79615?timeout=10s\": dial tcp 135.181.41.243:6443: connect: connection refused" interval="1.6s" Jan 17 00:26:21.016188 containerd[1501]: time="2026-01-17T00:26:21.016036427Z" level=info msg="CreateContainer within sandbox \"87c4e6d26ff15ad076eb72bf4a55c2be6b0590a8974d825ce9bb622804922749\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f6d697c6dabd4b9e8b7d206f961cc89d9e4ab9e8a0042e6e79680f5d7879d40\"" Jan 17 00:26:21.018222 containerd[1501]: time="2026-01-17T00:26:21.018062590Z" level=info msg="StartContainer for \"5f6d697c6dabd4b9e8b7d206f961cc89d9e4ab9e8a0042e6e79680f5d7879d40\"" Jan 17 00:26:21.032318 containerd[1501]: time="2026-01-17T00:26:21.032193153Z" level=info msg="StartContainer for \"8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1\" returns successfully" Jan 17 00:26:21.058211 containerd[1501]: time="2026-01-17T00:26:21.056526507Z" level=info msg="StartContainer for \"6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3\" returns successfully" Jan 17 00:26:21.067821 systemd[1]: Started cri-containerd-5f6d697c6dabd4b9e8b7d206f961cc89d9e4ab9e8a0042e6e79680f5d7879d40.scope - libcontainer container 5f6d697c6dabd4b9e8b7d206f961cc89d9e4ab9e8a0042e6e79680f5d7879d40. Jan 17 00:26:21.126718 containerd[1501]: time="2026-01-17T00:26:21.126675301Z" level=info msg="StartContainer for \"5f6d697c6dabd4b9e8b7d206f961cc89d9e4ab9e8a0042e6e79680f5d7879d40\" returns successfully" Jan 17 00:26:21.180760 kubelet[2190]: I0117 00:26:21.180718 2190 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:21.626571 kubelet[2190]: E0117 00:26:21.626527 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:21.631233 kubelet[2190]: E0117 00:26:21.631204 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:21.636718 kubelet[2190]: E0117 00:26:21.636689 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:22.329079 kubelet[2190]: I0117 00:26:22.329031 2190 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:22.329079 kubelet[2190]: E0117 00:26:22.329071 2190 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-e100e79615\": node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:22.339551 kubelet[2190]: E0117 00:26:22.339520 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:22.440780 kubelet[2190]: E0117 00:26:22.440681 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:22.540973 kubelet[2190]: E0117 00:26:22.540844 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 
00:26:22.633666 kubelet[2190]: E0117 00:26:22.632967 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:22.634642 kubelet[2190]: E0117 00:26:22.634188 2190 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e100e79615\" not found" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:22.641402 kubelet[2190]: E0117 00:26:22.641359 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:22.741947 kubelet[2190]: E0117 00:26:22.741863 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:22.843140 kubelet[2190]: E0117 00:26:22.843004 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:22.945141 kubelet[2190]: E0117 00:26:22.944349 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:23.045155 kubelet[2190]: E0117 00:26:23.045070 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:23.145851 kubelet[2190]: E0117 00:26:23.145800 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:23.247162 kubelet[2190]: E0117 00:26:23.247061 2190 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:23.284666 kubelet[2190]: I0117 00:26:23.284535 2190 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e100e79615" Jan 17 00:26:23.296891 kubelet[2190]: I0117 00:26:23.296831 2190 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:23.305632 kubelet[2190]: I0117 00:26:23.305297 2190 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:23.569856 kubelet[2190]: I0117 00:26:23.569556 2190 apiserver.go:52] "Watching apiserver" Jan 17 00:26:23.582656 kubelet[2190]: I0117 00:26:23.582486 2190 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:26:24.672324 systemd[1]: Reloading requested from client PID 2471 ('systemctl') (unit session-7.scope)... Jan 17 00:26:24.672345 systemd[1]: Reloading... Jan 17 00:26:24.821181 zram_generator::config[2517]: No configuration found. Jan 17 00:26:24.953156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:26:25.059277 systemd[1]: Reloading finished in 386 ms. Jan 17 00:26:25.114424 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:25.132856 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:26:25.133264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:25.141476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
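With "Successfully registered node" the apiserver is finally answering and the Node object exists; the systemd reload and kubelet restart that follow look like the install script swapping in the final configuration rather than another failure. Registration can be double-checked with the admin kubeconfig kubeadm leaves on control-plane nodes, a standard path assumed here:

    # Verify the Node object after registration (admin.conf path assumed).
    kubectl --kubeconfig /etc/kubernetes/admin.conf get node ci-4081-3-6-n-e100e79615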
Jan 17 00:26:25.350332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:25.359878 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:26:25.432591 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:26:25.434174 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:26:25.434174 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:26:25.434174 kubelet[2562]: I0117 00:26:25.433179 2562 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:26:25.442020 kubelet[2562]: I0117 00:26:25.441960 2562 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:26:25.442020 kubelet[2562]: I0117 00:26:25.442002 2562 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:26:25.443120 kubelet[2562]: I0117 00:26:25.442594 2562 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:26:25.446228 kubelet[2562]: I0117 00:26:25.446135 2562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:26:25.448967 kubelet[2562]: I0117 00:26:25.448705 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:26:25.454340 kubelet[2562]: E0117 00:26:25.453907 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:26:25.454340 kubelet[2562]: I0117 00:26:25.454325 2562 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:26:25.458513 kubelet[2562]: I0117 00:26:25.458459 2562 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:26:25.458778 kubelet[2562]: I0117 00:26:25.458712 2562 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:26:25.458948 kubelet[2562]: I0117 00:26:25.458762 2562 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-e100e79615","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:26:25.459048 kubelet[2562]: I0117 00:26:25.458949 2562 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:26:25.459048 kubelet[2562]: I0117 00:26:25.458959 2562 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:26:25.459048 kubelet[2562]: I0117 00:26:25.459022 2562 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:26:25.459297 kubelet[2562]: I0117 00:26:25.459267 2562 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:26:25.459297 kubelet[2562]: I0117 00:26:25.459295 2562 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:26:25.459806 kubelet[2562]: I0117 00:26:25.459778 2562 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:26:25.459806 kubelet[2562]: I0117 00:26:25.459803 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:26:25.469172 kubelet[2562]: I0117 00:26:25.468320 2562 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:26:25.469172 kubelet[2562]: I0117 00:26:25.468761 2562 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:26:25.471646 kubelet[2562]: I0117 00:26:25.471119 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:26:25.471646 kubelet[2562]: I0117 00:26:25.471169 2562 server.go:1287] "Started kubelet" Jan 17 00:26:25.476045 kubelet[2562]: I0117 00:26:25.475989 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:26:25.479954 kubelet[2562]: I0117 00:26:25.479876 2562 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:26:25.484379 kubelet[2562]: I0117 00:26:25.483787 2562 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:26:25.485986 kubelet[2562]: I0117 00:26:25.485299 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:26:25.485986 kubelet[2562]: I0117 00:26:25.485572 2562 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:26:25.486186 kubelet[2562]: I0117 00:26:25.486017 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:26:25.492766 kubelet[2562]: I0117 00:26:25.491430 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:26:25.492766 kubelet[2562]: E0117 00:26:25.491601 2562 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e100e79615\" not found" Jan 17 00:26:25.492952 kubelet[2562]: I0117 00:26:25.492932 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:26:25.494134 kubelet[2562]: I0117 00:26:25.493078 2562 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:26:25.505133 kubelet[2562]: I0117 00:26:25.502881 2562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:26:25.511085 kubelet[2562]: I0117 00:26:25.511012 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:26:25.514378 kubelet[2562]: I0117 00:26:25.514332 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:26:25.514378 kubelet[2562]: I0117 00:26:25.514381 2562 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:26:25.514573 kubelet[2562]: I0117 00:26:25.514408 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:26:25.514573 kubelet[2562]: I0117 00:26:25.514416 2562 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:26:25.514573 kubelet[2562]: E0117 00:26:25.514472 2562 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:26:25.517041 kubelet[2562]: E0117 00:26:25.516636 2562 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:26:25.523138 kubelet[2562]: I0117 00:26:25.520297 2562 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:26:25.523138 kubelet[2562]: I0117 00:26:25.520321 2562 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:26:25.597520 kubelet[2562]: I0117 00:26:25.597475 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:26:25.597520 kubelet[2562]: I0117 00:26:25.597495 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:26:25.597520 kubelet[2562]: I0117 00:26:25.597517 2562 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:26:25.597746 kubelet[2562]: I0117 00:26:25.597691 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:26:25.597746 kubelet[2562]: I0117 00:26:25.597702 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:26:25.597746 kubelet[2562]: I0117 00:26:25.597720 2562 policy_none.go:49] "None policy: Start" Jan 17 00:26:25.597746 kubelet[2562]: I0117 00:26:25.597730 2562 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:26:25.597746 kubelet[2562]: I0117 00:26:25.597741 2562 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:26:25.597887 kubelet[2562]: I0117 00:26:25.597858 2562 state_mem.go:75] "Updated machine memory state" Jan 17 00:26:25.604164 kubelet[2562]: I0117 00:26:25.602942 2562 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:26:25.604164 kubelet[2562]: I0117 00:26:25.603222 2562 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:26:25.604164 kubelet[2562]: I0117 00:26:25.603237 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:26:25.604164 kubelet[2562]: I0117 00:26:25.603920 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:26:25.606431 kubelet[2562]: E0117 00:26:25.606401 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:26:25.616854 kubelet[2562]: I0117 00:26:25.616818 2562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.619263 kubelet[2562]: I0117 00:26:25.619240 2562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.621065 kubelet[2562]: I0117 00:26:25.621045 2562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.630697 kubelet[2562]: E0117 00:26:25.630667 2562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-e100e79615\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.631115 kubelet[2562]: E0117 00:26:25.629850 2562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.631664 kubelet[2562]: E0117 00:26:25.630927 2562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-e100e79615\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.694827 kubelet[2562]: I0117 00:26:25.694766 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695147 kubelet[2562]: I0117 00:26:25.695091 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1534507e84119c3a1b0c39bcd2c565c2-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-e100e79615\" (UID: \"1534507e84119c3a1b0c39bcd2c565c2\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695416 kubelet[2562]: I0117 00:26:25.695220 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f548a38d62179f505f11c94d3b29a60-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e100e79615\" (UID: \"2f548a38d62179f505f11c94d3b29a60\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695416 kubelet[2562]: I0117 00:26:25.695247 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f548a38d62179f505f11c94d3b29a60-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-e100e79615\" (UID: \"2f548a38d62179f505f11c94d3b29a60\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695416 kubelet[2562]: I0117 00:26:25.695267 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695416 kubelet[2562]: I0117 00:26:25.695289 2562 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695416 kubelet[2562]: I0117 00:26:25.695314 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695578 kubelet[2562]: I0117 00:26:25.695338 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f548a38d62179f505f11c94d3b29a60-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e100e79615\" (UID: \"2f548a38d62179f505f11c94d3b29a60\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.695578 kubelet[2562]: I0117 00:26:25.695355 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8567510f9c9de485e02f8ce983200c61-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-e100e79615\" (UID: \"8567510f9c9de485e02f8ce983200c61\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.728022 kubelet[2562]: I0117 00:26:25.727714 2562 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.738296 kubelet[2562]: I0117 00:26:25.738214 2562 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:25.738840 kubelet[2562]: I0117 00:26:25.738728 2562 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-e100e79615" Jan 17 00:26:26.464072 kubelet[2562]: I0117 00:26:26.463717 2562 apiserver.go:52] "Watching apiserver" Jan 17 00:26:26.493382 kubelet[2562]: I0117 00:26:26.493339 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:26:26.575152 kubelet[2562]: I0117 00:26:26.572767 2562 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:26.584806 kubelet[2562]: E0117 00:26:26.584695 2562 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-e100e79615\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" Jan 17 00:26:26.621725 kubelet[2562]: I0117 00:26:26.621619 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e100e79615" podStartSLOduration=3.621591546 podStartE2EDuration="3.621591546s" podCreationTimestamp="2026-01-17 00:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:26:26.603732048 +0000 UTC m=+1.238569433" watchObservedRunningTime="2026-01-17 00:26:26.621591546 +0000 UTC m=+1.256428921" Jan 17 00:26:26.635328 kubelet[2562]: I0117 00:26:26.635123 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-e100e79615" podStartSLOduration=3.635077977 podStartE2EDuration="3.635077977s" podCreationTimestamp="2026-01-17 00:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:26:26.623170131 +0000 UTC m=+1.258007516" watchObservedRunningTime="2026-01-17 00:26:26.635077977 +0000 UTC m=+1.269915362" Jan 17 00:26:26.648370 kubelet[2562]: I0117 00:26:26.648274 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e100e79615" podStartSLOduration=3.648251209 podStartE2EDuration="3.648251209s" podCreationTimestamp="2026-01-17 00:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:26:26.635694223 +0000 UTC m=+1.270531608" watchObservedRunningTime="2026-01-17 00:26:26.648251209 +0000 UTC m=+1.283088594" Jan 17 00:26:30.987587 kubelet[2562]: I0117 00:26:30.987469 2562 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:26:30.988646 kubelet[2562]: I0117 00:26:30.988383 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:26:30.988727 containerd[1501]: time="2026-01-17T00:26:30.988013259Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:26:31.688485 systemd[1]: Created slice kubepods-besteffort-podc7d8400b_f86b_474b_be6e_aa39a6ac9f9b.slice - libcontainer container kubepods-besteffort-podc7d8400b_f86b_474b_be6e_aa39a6ac9f9b.slice. Jan 17 00:26:31.740191 kubelet[2562]: I0117 00:26:31.740063 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7d8400b-f86b-474b-be6e-aa39a6ac9f9b-kube-proxy\") pod \"kube-proxy-ptjdq\" (UID: \"c7d8400b-f86b-474b-be6e-aa39a6ac9f9b\") " pod="kube-system/kube-proxy-ptjdq" Jan 17 00:26:31.740436 kubelet[2562]: I0117 00:26:31.740223 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d8400b-f86b-474b-be6e-aa39a6ac9f9b-xtables-lock\") pod \"kube-proxy-ptjdq\" (UID: \"c7d8400b-f86b-474b-be6e-aa39a6ac9f9b\") " pod="kube-system/kube-proxy-ptjdq" Jan 17 00:26:31.740436 kubelet[2562]: I0117 00:26:31.740258 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d8400b-f86b-474b-be6e-aa39a6ac9f9b-lib-modules\") pod \"kube-proxy-ptjdq\" (UID: \"c7d8400b-f86b-474b-be6e-aa39a6ac9f9b\") " pod="kube-system/kube-proxy-ptjdq" Jan 17 00:26:31.740436 kubelet[2562]: I0117 00:26:31.740289 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kv4\" (UniqueName: \"kubernetes.io/projected/c7d8400b-f86b-474b-be6e-aa39a6ac9f9b-kube-api-access-27kv4\") pod \"kube-proxy-ptjdq\" (UID: \"c7d8400b-f86b-474b-be6e-aa39a6ac9f9b\") " pod="kube-system/kube-proxy-ptjdq" Jan 17 00:26:31.847133 kubelet[2562]: E0117 00:26:31.847039 2562 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:26:31.847133 kubelet[2562]: E0117 00:26:31.847084 2562 projected.go:194] Error preparing data for 
projected volume kube-api-access-27kv4 for pod kube-system/kube-proxy-ptjdq: configmap "kube-root-ca.crt" not found Jan 17 00:26:31.847358 kubelet[2562]: E0117 00:26:31.847173 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7d8400b-f86b-474b-be6e-aa39a6ac9f9b-kube-api-access-27kv4 podName:c7d8400b-f86b-474b-be6e-aa39a6ac9f9b nodeName:}" failed. No retries permitted until 2026-01-17 00:26:32.347148202 +0000 UTC m=+6.981985587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-27kv4" (UniqueName: "kubernetes.io/projected/c7d8400b-f86b-474b-be6e-aa39a6ac9f9b-kube-api-access-27kv4") pod "kube-proxy-ptjdq" (UID: "c7d8400b-f86b-474b-be6e-aa39a6ac9f9b") : configmap "kube-root-ca.crt" not found Jan 17 00:26:32.087232 systemd[1]: Created slice kubepods-besteffort-pod5fc9e7dd_de14_4dbd_b66b_e2afbd21fa00.slice - libcontainer container kubepods-besteffort-pod5fc9e7dd_de14_4dbd_b66b_e2afbd21fa00.slice. Jan 17 00:26:32.145308 kubelet[2562]: I0117 00:26:32.145210 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r7fg4\" (UID: \"5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00\") " pod="tigera-operator/tigera-operator-7dcd859c48-r7fg4" Jan 17 00:26:32.145308 kubelet[2562]: I0117 00:26:32.145297 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7jmw\" (UniqueName: \"kubernetes.io/projected/5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00-kube-api-access-w7jmw\") pod \"tigera-operator-7dcd859c48-r7fg4\" (UID: \"5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00\") " pod="tigera-operator/tigera-operator-7dcd859c48-r7fg4" Jan 17 00:26:32.395705 containerd[1501]: time="2026-01-17T00:26:32.395532402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r7fg4,Uid:5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:26:32.436674 containerd[1501]: time="2026-01-17T00:26:32.436520713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:32.436674 containerd[1501]: time="2026-01-17T00:26:32.436607332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:32.436674 containerd[1501]: time="2026-01-17T00:26:32.436626184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:32.440117 containerd[1501]: time="2026-01-17T00:26:32.436790541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:32.475276 systemd[1]: Started cri-containerd-e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4.scope - libcontainer container e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4. 
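[editor's note] The "Creating Container Manager object based on Node Config" entry further up dumps the default hard-eviction thresholds as inline JSON. A small sketch parsing a trimmed copy of that fragment into typed values; the struct is illustrative, not kubelet's internal type:

```go
// Parse the HardEvictionThresholds JSON fragment logged by the
// Container Manager into typed values.
package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"` // e.g. "100Mi", or null
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	// Two entries trimmed verbatim from the logged NodeConfig dump.
	raw := `[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	         {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity) // memory.available LessThan 100Mi
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100) // nodefs.available LessThan 10%
		}
	}
}
```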
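[editor's note] The MountVolume.SetUp failure above (kube-root-ca.crt not yet published to the namespace) is retried with exponential backoff starting at the logged 500ms durationBeforeRetry. A sketch of that retry pattern using k8s.io/apimachinery's wait package; the factor and step count are assumptions, not kubelet's exact parameters:

```go
// Illustrative retry pattern behind "durationBeforeRetry 500ms": exponential
// backoff around a failing operation (a stand-in for MountVolume.SetUp).
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first delay, as logged
		Factor:   2.0,                    // double after each failure (assumed)
		Steps:    6,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		// Stand-in for the mount: succeeds once the kube-root-ca.crt
		// ConfigMap exists (here, simply after a few attempts).
		if attempt < 4 {
			fmt.Printf("attempt %d: configmap not found, backing off\n", attempt)
			return false, nil
		}
		return true, nil
	})
	fmt.Println("done:", err)
}
```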
Jan 17 00:26:32.519589 containerd[1501]: time="2026-01-17T00:26:32.519550423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r7fg4,Uid:5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4\"" Jan 17 00:26:32.521574 containerd[1501]: time="2026-01-17T00:26:32.521551314Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:26:32.602668 containerd[1501]: time="2026-01-17T00:26:32.602558001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptjdq,Uid:c7d8400b-f86b-474b-be6e-aa39a6ac9f9b,Namespace:kube-system,Attempt:0,}" Jan 17 00:26:32.639632 containerd[1501]: time="2026-01-17T00:26:32.639404274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:32.639632 containerd[1501]: time="2026-01-17T00:26:32.639562671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:32.640351 containerd[1501]: time="2026-01-17T00:26:32.639594034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:32.640351 containerd[1501]: time="2026-01-17T00:26:32.640084745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:32.670401 systemd[1]: Started cri-containerd-3ab5d87e74ea8eeea959b9c0d979ed9ce8994f1fa2dae06cbf469e41538df0dd.scope - libcontainer container 3ab5d87e74ea8eeea959b9c0d979ed9ce8994f1fa2dae06cbf469e41538df0dd. Jan 17 00:26:32.729795 containerd[1501]: time="2026-01-17T00:26:32.729666477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptjdq,Uid:c7d8400b-f86b-474b-be6e-aa39a6ac9f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ab5d87e74ea8eeea959b9c0d979ed9ce8994f1fa2dae06cbf469e41538df0dd\"" Jan 17 00:26:32.735961 containerd[1501]: time="2026-01-17T00:26:32.735896143Z" level=info msg="CreateContainer within sandbox \"3ab5d87e74ea8eeea959b9c0d979ed9ce8994f1fa2dae06cbf469e41538df0dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:26:32.760924 containerd[1501]: time="2026-01-17T00:26:32.760837312Z" level=info msg="CreateContainer within sandbox \"3ab5d87e74ea8eeea959b9c0d979ed9ce8994f1fa2dae06cbf469e41538df0dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3310909b344c9a1b50b13b24e775f1828654b67c1d2dbb6af88769e0f434d319\"" Jan 17 00:26:32.762459 containerd[1501]: time="2026-01-17T00:26:32.762393895Z" level=info msg="StartContainer for \"3310909b344c9a1b50b13b24e775f1828654b67c1d2dbb6af88769e0f434d319\"" Jan 17 00:26:32.822401 systemd[1]: Started cri-containerd-3310909b344c9a1b50b13b24e775f1828654b67c1d2dbb6af88769e0f434d319.scope - libcontainer container 3310909b344c9a1b50b13b24e775f1828654b67c1d2dbb6af88769e0f434d319. Jan 17 00:26:32.886293 containerd[1501]: time="2026-01-17T00:26:32.886224486Z" level=info msg="StartContainer for \"3310909b344c9a1b50b13b24e775f1828654b67c1d2dbb6af88769e0f434d319\" returns successfully" Jan 17 00:26:34.239548 update_engine[1488]: I20260117 00:26:34.239379 1488 update_attempter.cc:509] Updating boot flags... 
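[editor's note] The containerd entries above trace the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer runs inside that sandbox, and StartContainer launches it. A hedged sketch of the same three calls against containerd's CRI socket; the socket path, image name, and trimmed configs are assumptions, and real kubelet requests also pass a full SandboxConfig:

```go
// Sketch of the CRI call sequence visible in the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	// 1. RunPodSandbox: create the pause sandbox; the response carries the
	//    sandbox id that later entries quote (e.g. "3ab5d87e...").
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-ptjdq", Namespace: "kube-system",
			Uid: "c7d8400b-f86b-474b-be6e-aa39a6ac9f9b",
		}},
	})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within that sandbox (image name assumed).
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"},
		},
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer: the "StartContainer ... returns successfully" step.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
	fmt.Println("started:", cc.ContainerId, "err:", err)
}
```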
Jan 17 00:26:34.303595 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2863) Jan 17 00:26:34.802859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473198401.mount: Deactivated successfully. Jan 17 00:26:35.155616 kubelet[2562]: I0117 00:26:35.155441 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ptjdq" podStartSLOduration=4.155416382 podStartE2EDuration="4.155416382s" podCreationTimestamp="2026-01-17 00:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:26:33.607342677 +0000 UTC m=+8.242180152" watchObservedRunningTime="2026-01-17 00:26:35.155416382 +0000 UTC m=+9.790253767" Jan 17 00:26:35.300335 containerd[1501]: time="2026-01-17T00:26:35.300247239Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:35.302129 containerd[1501]: time="2026-01-17T00:26:35.301858260Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:26:35.303532 containerd[1501]: time="2026-01-17T00:26:35.303393102Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:35.306144 containerd[1501]: time="2026-01-17T00:26:35.306072116Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:35.307018 containerd[1501]: time="2026-01-17T00:26:35.306755385Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.785063637s" Jan 17 00:26:35.307018 containerd[1501]: time="2026-01-17T00:26:35.306804119Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:26:35.309580 containerd[1501]: time="2026-01-17T00:26:35.309523176Z" level=info msg="CreateContainer within sandbox \"e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:26:35.332004 containerd[1501]: time="2026-01-17T00:26:35.331923842Z" level=info msg="CreateContainer within sandbox \"e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1\"" Jan 17 00:26:35.333685 containerd[1501]: time="2026-01-17T00:26:35.333199593Z" level=info msg="StartContainer for \"4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1\"" Jan 17 00:26:35.368353 systemd[1]: Started cri-containerd-4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1.scope - libcontainer container 4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1. 
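[editor's note] A quick sanity check on the pull above: 25,061,691 bytes read in the reported 2.785063637s works out to roughly 9 MB/s of effective throughput from quay.io:

```go
// Back-of-the-envelope check on the tigera/operator pull: bytes read
// divided by the reported pull duration.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 25061691 // "bytes read=25061691" from the log
	d, _ := time.ParseDuration("2.785063637s")
	mbps := float64(bytesRead) / d.Seconds() / 1e6
	fmt.Printf("~%.1f MB/s effective pull throughput\n", mbps) // ~9.0 MB/s
}
```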
Jan 17 00:26:35.406660 containerd[1501]: time="2026-01-17T00:26:35.406513175Z" level=info msg="StartContainer for \"4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1\" returns successfully" Jan 17 00:26:39.461934 kubelet[2562]: I0117 00:26:39.460834 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r7fg4" podStartSLOduration=4.674165114 podStartE2EDuration="7.460797126s" podCreationTimestamp="2026-01-17 00:26:32 +0000 UTC" firstStartedPulling="2026-01-17 00:26:32.521197986 +0000 UTC m=+7.156035361" lastFinishedPulling="2026-01-17 00:26:35.307829988 +0000 UTC m=+9.942667373" observedRunningTime="2026-01-17 00:26:35.639402006 +0000 UTC m=+10.274239391" watchObservedRunningTime="2026-01-17 00:26:39.460797126 +0000 UTC m=+14.095634551" Jan 17 00:26:41.364180 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 17 00:26:41.489317 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:41.494281 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:26:41.495989 systemd[1]: sshd@6-135.181.41.243:22-20.161.92.111:37524.service: Deactivated successfully. Jan 17 00:26:41.501387 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:26:41.502213 systemd[1]: session-7.scope: Consumed 4.244s CPU time, 158.5M memory peak, 0B memory swap peak. Jan 17 00:26:41.505887 systemd-logind[1487]: Removed session 7. Jan 17 00:26:46.277834 systemd[1]: Created slice kubepods-besteffort-poda4f36064_5d8a_4880_949a_41296ec68025.slice - libcontainer container kubepods-besteffort-poda4f36064_5d8a_4880_949a_41296ec68025.slice. Jan 17 00:26:46.349152 kubelet[2562]: I0117 00:26:46.347301 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4f36064-5d8a-4880-949a-41296ec68025-typha-certs\") pod \"calico-typha-85d95c77c7-2qlcf\" (UID: \"a4f36064-5d8a-4880-949a-41296ec68025\") " pod="calico-system/calico-typha-85d95c77c7-2qlcf" Jan 17 00:26:46.349152 kubelet[2562]: I0117 00:26:46.347359 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr64x\" (UniqueName: \"kubernetes.io/projected/a4f36064-5d8a-4880-949a-41296ec68025-kube-api-access-dr64x\") pod \"calico-typha-85d95c77c7-2qlcf\" (UID: \"a4f36064-5d8a-4880-949a-41296ec68025\") " pod="calico-system/calico-typha-85d95c77c7-2qlcf" Jan 17 00:26:46.349152 kubelet[2562]: I0117 00:26:46.347419 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4f36064-5d8a-4880-949a-41296ec68025-tigera-ca-bundle\") pod \"calico-typha-85d95c77c7-2qlcf\" (UID: \"a4f36064-5d8a-4880-949a-41296ec68025\") " pod="calico-system/calico-typha-85d95c77c7-2qlcf" Jan 17 00:26:46.494242 systemd[1]: Created slice kubepods-besteffort-poda66ef2ad_ee77_483a_a5dd_00d925e60edc.slice - libcontainer container kubepods-besteffort-poda66ef2ad_ee77_483a_a5dd_00d925e60edc.slice. 
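[editor's note] The "Observed pod startup duration" entry for tigera-operator above suggests podStartSLOduration is the end-to-end startup time minus the image-pull window (the Kubernetes pod-startup SLI excludes pull time); re-deriving it from the logged m=+ monotonic offsets reproduces the value exactly:

```go
// Re-derive podStartSLOduration for tigera-operator-7dcd859c48-r7fg4
// from the values logged above: SLO = E2E - (pull end - pull start).
package main

import "fmt"

func main() {
	const (
		e2e          = 7.460797126 // podStartE2EDuration, seconds
		firstPulling = 7.156035361 // firstStartedPulling, m=+ offset
		lastPulled   = 9.942667373 // lastFinishedPulling, m=+ offset
	)
	slo := e2e - (lastPulled - firstPulling)
	fmt.Printf("derived SLO duration: %.9fs\n", slo) // 4.674165114s, matching the log
}
```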
Jan 17 00:26:46.548681 kubelet[2562]: I0117 00:26:46.548499 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a66ef2ad-ee77-483a-a5dd-00d925e60edc-tigera-ca-bundle\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.548681 kubelet[2562]: I0117 00:26:46.548561 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-flexvol-driver-host\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.548681 kubelet[2562]: I0117 00:26:46.548587 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-lib-modules\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.548681 kubelet[2562]: I0117 00:26:46.548608 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-var-run-calico\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.548681 kubelet[2562]: I0117 00:26:46.548625 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-cni-bin-dir\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.548976 kubelet[2562]: I0117 00:26:46.548641 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-policysync\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.549346 kubelet[2562]: I0117 00:26:46.548662 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-var-lib-calico\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.549648 kubelet[2562]: I0117 00:26:46.549400 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-cni-net-dir\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.549648 kubelet[2562]: I0117 00:26:46.549445 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqzj\" (UniqueName: \"kubernetes.io/projected/a66ef2ad-ee77-483a-a5dd-00d925e60edc-kube-api-access-zdqzj\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.549648 kubelet[2562]: I0117 00:26:46.549472 2562 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-cni-log-dir\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.549648 kubelet[2562]: I0117 00:26:46.549495 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66ef2ad-ee77-483a-a5dd-00d925e60edc-xtables-lock\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.549648 kubelet[2562]: I0117 00:26:46.549518 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a66ef2ad-ee77-483a-a5dd-00d925e60edc-node-certs\") pod \"calico-node-gqwgp\" (UID: \"a66ef2ad-ee77-483a-a5dd-00d925e60edc\") " pod="calico-system/calico-node-gqwgp" Jan 17 00:26:46.586491 containerd[1501]: time="2026-01-17T00:26:46.586425821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d95c77c7-2qlcf,Uid:a4f36064-5d8a-4880-949a-41296ec68025,Namespace:calico-system,Attempt:0,}" Jan 17 00:26:46.619162 containerd[1501]: time="2026-01-17T00:26:46.617491715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:46.619162 containerd[1501]: time="2026-01-17T00:26:46.617560048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:46.619162 containerd[1501]: time="2026-01-17T00:26:46.617572629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:46.619162 containerd[1501]: time="2026-01-17T00:26:46.617661292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:46.656145 kubelet[2562]: E0117 00:26:46.655337 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.656145 kubelet[2562]: W0117 00:26:46.655400 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.656145 kubelet[2562]: E0117 00:26:46.655442 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.658382 kubelet[2562]: E0117 00:26:46.658346 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.659143 kubelet[2562]: W0117 00:26:46.658509 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.659143 kubelet[2562]: E0117 00:26:46.658718 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:46.673428 kubelet[2562]: E0117 00:26:46.672127 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.673428 kubelet[2562]: W0117 00:26:46.672157 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.673428 kubelet[2562]: E0117 00:26:46.672302 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.682373 kubelet[2562]: E0117 00:26:46.682342 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.682852 kubelet[2562]: W0117 00:26:46.682830 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.683130 kubelet[2562]: E0117 00:26:46.682935 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.684146 kubelet[2562]: E0117 00:26:46.683860 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:26:46.687479 systemd[1]: Started cri-containerd-0c08511eaf263410a11aa8c25b2e727bf73b32a9845c4bfad98464ab5aa2b164.scope - libcontainer container 0c08511eaf263410a11aa8c25b2e727bf73b32a9845c4bfad98464ab5aa2b164. Jan 17 00:26:46.730491 kubelet[2562]: E0117 00:26:46.730227 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.731053 kubelet[2562]: W0117 00:26:46.730668 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.731053 kubelet[2562]: E0117 00:26:46.730704 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.732246 kubelet[2562]: E0117 00:26:46.731921 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.732246 kubelet[2562]: W0117 00:26:46.731937 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.732246 kubelet[2562]: E0117 00:26:46.731956 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:46.733045 kubelet[2562]: E0117 00:26:46.732640 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.733045 kubelet[2562]: W0117 00:26:46.732653 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.733045 kubelet[2562]: E0117 00:26:46.732668 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.733780 kubelet[2562]: E0117 00:26:46.733666 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.733780 kubelet[2562]: W0117 00:26:46.733682 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.733780 kubelet[2562]: E0117 00:26:46.733701 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.735022 kubelet[2562]: E0117 00:26:46.734951 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.735022 kubelet[2562]: W0117 00:26:46.734964 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.735022 kubelet[2562]: E0117 00:26:46.734979 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.735325 kubelet[2562]: E0117 00:26:46.735317 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.735380 kubelet[2562]: W0117 00:26:46.735371 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.736056 kubelet[2562]: E0117 00:26:46.736001 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.736393 kubelet[2562]: E0117 00:26:46.736337 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.736393 kubelet[2562]: W0117 00:26:46.736346 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.736393 kubelet[2562]: E0117 00:26:46.736355 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:46.737139 kubelet[2562]: E0117 00:26:46.736824 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.737139 kubelet[2562]: W0117 00:26:46.736833 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.737139 kubelet[2562]: E0117 00:26:46.736842 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.741620 kubelet[2562]: E0117 00:26:46.741393 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.741620 kubelet[2562]: W0117 00:26:46.741412 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.741620 kubelet[2562]: E0117 00:26:46.741431 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.743057 kubelet[2562]: E0117 00:26:46.742818 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.743057 kubelet[2562]: W0117 00:26:46.742833 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.743057 kubelet[2562]: E0117 00:26:46.742856 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.743961 kubelet[2562]: E0117 00:26:46.743848 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.743961 kubelet[2562]: W0117 00:26:46.743860 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.743961 kubelet[2562]: E0117 00:26:46.743872 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.744314 kubelet[2562]: E0117 00:26:46.744204 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.744314 kubelet[2562]: W0117 00:26:46.744216 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.744314 kubelet[2562]: E0117 00:26:46.744226 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:46.744671 kubelet[2562]: E0117 00:26:46.744586 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.744671 kubelet[2562]: W0117 00:26:46.744595 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.744671 kubelet[2562]: E0117 00:26:46.744604 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.744993 kubelet[2562]: E0117 00:26:46.744893 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.744993 kubelet[2562]: W0117 00:26:46.744911 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.744993 kubelet[2562]: E0117 00:26:46.744919 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.746466 kubelet[2562]: E0117 00:26:46.746174 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.746614 kubelet[2562]: W0117 00:26:46.746539 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.746614 kubelet[2562]: E0117 00:26:46.746556 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.747396 kubelet[2562]: E0117 00:26:46.747225 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.747396 kubelet[2562]: W0117 00:26:46.747236 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.747396 kubelet[2562]: E0117 00:26:46.747245 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.748057 kubelet[2562]: E0117 00:26:46.747567 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.748057 kubelet[2562]: W0117 00:26:46.747601 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.748057 kubelet[2562]: E0117 00:26:46.747648 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:46.748922 kubelet[2562]: E0117 00:26:46.748674 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.748922 kubelet[2562]: W0117 00:26:46.748916 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.748990 kubelet[2562]: E0117 00:26:46.748937 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.749959 kubelet[2562]: E0117 00:26:46.749936 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.749959 kubelet[2562]: W0117 00:26:46.749955 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.750024 kubelet[2562]: E0117 00:26:46.749973 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.750523 kubelet[2562]: E0117 00:26:46.750460 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.750523 kubelet[2562]: W0117 00:26:46.750477 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.751055 kubelet[2562]: E0117 00:26:46.750490 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:46.751984 kubelet[2562]: E0117 00:26:46.751635 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:46.751984 kubelet[2562]: W0117 00:26:46.751652 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:46.751984 kubelet[2562]: E0117 00:26:46.751667 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:26:46.751984 kubelet[2562]: I0117 00:26:46.751895 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb95b785-13b8-4aa9-b43b-38efbd205ceb-varrun\") pod \"csi-node-driver-wn7sn\" (UID: \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\") " pod="calico-system/csi-node-driver-wn7sn"
Jan 17 00:26:46.756003 kubelet[2562]: I0117 00:26:46.755229 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knchd\" (UniqueName: \"kubernetes.io/projected/eb95b785-13b8-4aa9-b43b-38efbd205ceb-kube-api-access-knchd\") pod \"csi-node-driver-wn7sn\" (UID: \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\") " pod="calico-system/csi-node-driver-wn7sn"
Jan 17 00:26:46.756528 kubelet[2562]: I0117 00:26:46.756499 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb95b785-13b8-4aa9-b43b-38efbd205ceb-registration-dir\") pod \"csi-node-driver-wn7sn\" (UID: \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\") " pod="calico-system/csi-node-driver-wn7sn"
Jan 17 00:26:46.759273 kubelet[2562]: I0117 00:26:46.759216 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb95b785-13b8-4aa9-b43b-38efbd205ceb-socket-dir\") pod \"csi-node-driver-wn7sn\" (UID: \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\") " pod="calico-system/csi-node-driver-wn7sn"
Jan 17 00:26:46.759748 kubelet[2562]: I0117 00:26:46.759631 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb95b785-13b8-4aa9-b43b-38efbd205ceb-kubelet-dir\") pod \"csi-node-driver-wn7sn\" (UID: \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\") " pod="calico-system/csi-node-driver-wn7sn"
Jan 17 00:26:46.768132 containerd[1501]: time="2026-01-17T00:26:46.767033748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d95c77c7-2qlcf,Uid:a4f36064-5d8a-4880-949a-41296ec68025,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c08511eaf263410a11aa8c25b2e727bf73b32a9845c4bfad98464ab5aa2b164\""
Jan 17 00:26:46.771121 containerd[1501]: time="2026-01-17T00:26:46.770000515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:26:46.804475 containerd[1501]: time="2026-01-17T00:26:46.804015546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gqwgp,Uid:a66ef2ad-ee77-483a-a5dd-00d925e60edc,Namespace:calico-system,Attempt:0,}"
Jan 17 00:26:46.835518 containerd[1501]: time="2026-01-17T00:26:46.831569139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:26:46.835518 containerd[1501]: time="2026-01-17T00:26:46.831673114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:26:46.835518 containerd[1501]: time="2026-01-17T00:26:46.831690515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:26:46.835518 containerd[1501]: time="2026-01-17T00:26:46.831812730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:26:46.854309 systemd[1]: Started cri-containerd-e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38.scope - libcontainer container e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38.
Jan 17 00:26:46.860805 kubelet[2562]: E0117 00:26:46.860623 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:26:46.860805 kubelet[2562]: W0117 00:26:46.860642 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:26:46.860805 kubelet[2562]: E0117 00:26:46.860664 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:26:46.902283 containerd[1501]: time="2026-01-17T00:26:46.902038526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gqwgp,Uid:a66ef2ad-ee77-483a-a5dd-00d925e60edc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\""
Jan 17 00:26:48.515190 kubelet[2562]: E0117 00:26:48.515077 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:26:48.610756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872889182.mount: Deactivated successfully.
Jan 17 00:26:49.686735 containerd[1501]: time="2026-01-17T00:26:49.686672411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:49.687780 containerd[1501]: time="2026-01-17T00:26:49.687592113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:26:49.689131 containerd[1501]: time="2026-01-17T00:26:49.688680241Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:49.690432 containerd[1501]: time="2026-01-17T00:26:49.690395332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:49.691002 containerd[1501]: time="2026-01-17T00:26:49.690799296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.92077276s"
Jan 17 00:26:49.691002 containerd[1501]: time="2026-01-17T00:26:49.690842748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:26:49.692023 containerd[1501]: time="2026-01-17T00:26:49.692007330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:26:49.708063 containerd[1501]: time="2026-01-17T00:26:49.707841590Z" level=info msg="CreateContainer within sandbox \"0c08511eaf263410a11aa8c25b2e727bf73b32a9845c4bfad98464ab5aa2b164\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:26:49.722710 containerd[1501]: time="2026-01-17T00:26:49.722662297Z" level=info msg="CreateContainer within sandbox \"0c08511eaf263410a11aa8c25b2e727bf73b32a9845c4bfad98464ab5aa2b164\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"61c2a00e646e9ce360c8c59e57ff5871aa016d13c820464bb10bff785e5ab5ba\""
Jan 17 00:26:49.723343 containerd[1501]: time="2026-01-17T00:26:49.723311690Z" level=info msg="StartContainer for \"61c2a00e646e9ce360c8c59e57ff5871aa016d13c820464bb10bff785e5ab5ba\""
Jan 17 00:26:49.765346 systemd[1]: Started cri-containerd-61c2a00e646e9ce360c8c59e57ff5871aa016d13c820464bb10bff785e5ab5ba.scope - libcontainer container 61c2a00e646e9ce360c8c59e57ff5871aa016d13c820464bb10bff785e5ab5ba.
Jan 17 00:26:49.824285 containerd[1501]: time="2026-01-17T00:26:49.824237418Z" level=info msg="StartContainer for \"61c2a00e646e9ce360c8c59e57ff5871aa016d13c820464bb10bff785e5ab5ba\" returns successfully"
Jan 17 00:26:50.515864 kubelet[2562]: E0117 00:26:50.515725 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:26:50.678181 kubelet[2562]: I0117 00:26:50.678052 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85d95c77c7-2qlcf" podStartSLOduration=1.754375878 podStartE2EDuration="4.676791041s" podCreationTimestamp="2026-01-17 00:26:46 +0000 UTC" firstStartedPulling="2026-01-17 00:26:46.769349878 +0000 UTC m=+21.404187263" lastFinishedPulling="2026-01-17 00:26:49.691765041 +0000 UTC m=+24.326602426" observedRunningTime="2026-01-17 00:26:50.676317385 +0000 UTC m=+25.311154810" watchObservedRunningTime="2026-01-17 00:26:50.676791041 +0000 UTC m=+25.311628456"
Jan 17 00:26:50.679085 kubelet[2562]: E0117 00:26:50.678946 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:26:50.679085 kubelet[2562]: W0117 00:26:50.678975 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:26:50.679085 kubelet[2562]: E0117 00:26:50.679011 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
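The pod_startup_latency_tracker record above carries enough data to cross-check containerd's reported pull time: lastFinishedPulling minus firstStartedPulling should land close to the "2.92077276s" containerd logged for the typha image (the two components sample their clocks at slightly different points, so a couple of milliseconds of skew is expected). A quick check, assuming plain wall-clock subtraction:

# Cross-check of the typha pull duration from the kubelet record above.
# datetime's %f accepts at most six fractional digits, so the nanosecond
# timestamps are trimmed to microseconds before parsing.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
started = datetime.strptime("2026-01-17 00:26:46.769349878"[:26], FMT)
finished = datetime.strptime("2026-01-17 00:26:49.691765041"[:26], FMT)
print(finished - started)  # 0:00:02.922416, consistent with containerd's 2.92077276s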
Error: unexpected end of JSON input" Jan 17 00:26:50.683260 kubelet[2562]: E0117 00:26:50.683216 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.683260 kubelet[2562]: W0117 00:26:50.683245 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.684418 kubelet[2562]: E0117 00:26:50.683275 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.684854 kubelet[2562]: E0117 00:26:50.684807 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.684854 kubelet[2562]: W0117 00:26:50.684842 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.685494 kubelet[2562]: E0117 00:26:50.684870 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.685780 kubelet[2562]: E0117 00:26:50.685596 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.685780 kubelet[2562]: W0117 00:26:50.685634 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.685780 kubelet[2562]: E0117 00:26:50.685660 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.687354 kubelet[2562]: E0117 00:26:50.687292 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.687354 kubelet[2562]: W0117 00:26:50.687327 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.687354 kubelet[2562]: E0117 00:26:50.687358 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.688071 kubelet[2562]: E0117 00:26:50.688019 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.688071 kubelet[2562]: W0117 00:26:50.688050 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.688565 kubelet[2562]: E0117 00:26:50.688075 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:50.688720 kubelet[2562]: E0117 00:26:50.688663 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.688720 kubelet[2562]: W0117 00:26:50.688700 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.688855 kubelet[2562]: E0117 00:26:50.688721 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.689442 kubelet[2562]: E0117 00:26:50.689400 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.689442 kubelet[2562]: W0117 00:26:50.689427 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.689619 kubelet[2562]: E0117 00:26:50.689449 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.690170 kubelet[2562]: E0117 00:26:50.690087 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.690170 kubelet[2562]: W0117 00:26:50.690154 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.690331 kubelet[2562]: E0117 00:26:50.690177 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.690838 kubelet[2562]: E0117 00:26:50.690793 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.690838 kubelet[2562]: W0117 00:26:50.690824 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.691223 kubelet[2562]: E0117 00:26:50.690848 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.691634 kubelet[2562]: E0117 00:26:50.691592 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.691634 kubelet[2562]: W0117 00:26:50.691620 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.691800 kubelet[2562]: E0117 00:26:50.691644 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:50.692292 kubelet[2562]: E0117 00:26:50.692219 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.692292 kubelet[2562]: W0117 00:26:50.692277 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.692472 kubelet[2562]: E0117 00:26:50.692300 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.692858 kubelet[2562]: E0117 00:26:50.692808 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.692858 kubelet[2562]: W0117 00:26:50.692836 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.693024 kubelet[2562]: E0117 00:26:50.692858 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.693520 kubelet[2562]: E0117 00:26:50.693474 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.693520 kubelet[2562]: W0117 00:26:50.693503 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.693676 kubelet[2562]: E0117 00:26:50.693525 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.697730 kubelet[2562]: E0117 00:26:50.697295 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.697730 kubelet[2562]: W0117 00:26:50.697324 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.697730 kubelet[2562]: E0117 00:26:50.697349 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.704132 kubelet[2562]: E0117 00:26:50.701813 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.704132 kubelet[2562]: W0117 00:26:50.701841 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.704132 kubelet[2562]: E0117 00:26:50.701905 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:50.707490 kubelet[2562]: E0117 00:26:50.707376 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.707490 kubelet[2562]: W0117 00:26:50.707408 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.707973 kubelet[2562]: E0117 00:26:50.707873 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.709310 kubelet[2562]: E0117 00:26:50.708817 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.709310 kubelet[2562]: W0117 00:26:50.708845 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.709310 kubelet[2562]: E0117 00:26:50.708871 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.709786 kubelet[2562]: E0117 00:26:50.709579 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.709786 kubelet[2562]: W0117 00:26:50.709604 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.709786 kubelet[2562]: E0117 00:26:50.709731 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.710381 kubelet[2562]: E0117 00:26:50.710301 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.710381 kubelet[2562]: W0117 00:26:50.710330 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.710579 kubelet[2562]: E0117 00:26:50.710541 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.711208 kubelet[2562]: E0117 00:26:50.711043 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.711208 kubelet[2562]: W0117 00:26:50.711070 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.711371 kubelet[2562]: E0117 00:26:50.711207 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:50.712085 kubelet[2562]: E0117 00:26:50.712039 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.712085 kubelet[2562]: W0117 00:26:50.712071 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.712970 kubelet[2562]: E0117 00:26:50.712202 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.712970 kubelet[2562]: E0117 00:26:50.712628 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.712970 kubelet[2562]: W0117 00:26:50.712647 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.712970 kubelet[2562]: E0117 00:26:50.712725 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.713266 kubelet[2562]: E0117 00:26:50.713222 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.713266 kubelet[2562]: W0117 00:26:50.713241 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.713398 kubelet[2562]: E0117 00:26:50.713301 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.713899 kubelet[2562]: E0117 00:26:50.713826 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.713899 kubelet[2562]: W0117 00:26:50.713853 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.713899 kubelet[2562]: E0117 00:26:50.713885 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.714613 kubelet[2562]: E0117 00:26:50.714557 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.714613 kubelet[2562]: W0117 00:26:50.714579 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.715074 kubelet[2562]: E0117 00:26:50.714811 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:26:50.715658 kubelet[2562]: E0117 00:26:50.715615 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.715658 kubelet[2562]: W0117 00:26:50.715642 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.715908 kubelet[2562]: E0117 00:26:50.715773 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.716351 kubelet[2562]: E0117 00:26:50.716305 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.716351 kubelet[2562]: W0117 00:26:50.716335 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.716784 kubelet[2562]: E0117 00:26:50.716554 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.716879 kubelet[2562]: E0117 00:26:50.716854 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.716976 kubelet[2562]: W0117 00:26:50.716876 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.716976 kubelet[2562]: E0117 00:26:50.716907 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.717565 kubelet[2562]: E0117 00:26:50.717520 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.717565 kubelet[2562]: W0117 00:26:50.717550 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.717728 kubelet[2562]: E0117 00:26:50.717572 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:26:50.718245 kubelet[2562]: E0117 00:26:50.718199 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:26:50.718245 kubelet[2562]: W0117 00:26:50.718227 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:26:50.718398 kubelet[2562]: E0117 00:26:50.718253 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:26:51.553655 containerd[1501]: time="2026-01-17T00:26:51.553543266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:51.555160 containerd[1501]: time="2026-01-17T00:26:51.554996971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 17 00:26:51.559140 containerd[1501]: time="2026-01-17T00:26:51.557070846Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:51.561502 containerd[1501]: time="2026-01-17T00:26:51.561445962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:51.563066 containerd[1501]: time="2026-01-17T00:26:51.563013501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.870915239s"
Jan 17 00:26:51.563066 containerd[1501]: time="2026-01-17T00:26:51.563060953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 00:26:51.567674 containerd[1501]: time="2026-01-17T00:26:51.567133630Z" level=info msg="CreateContainer within sandbox \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:26:51.590031 containerd[1501]: time="2026-01-17T00:26:51.589928161Z" level=info msg="CreateContainer within sandbox \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3\""
Jan 17 00:26:51.590877 containerd[1501]: time="2026-01-17T00:26:51.590822700Z" level=info msg="StartContainer for \"cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3\""
Jan 17 00:26:51.664315 systemd[1]: run-containerd-runc-k8s.io-cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3-runc.zB4NJc.mount: Deactivated successfully.
Jan 17 00:26:51.671891 kubelet[2562]: I0117 00:26:51.671849 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 00:26:51.677487 systemd[1]: Started cri-containerd-cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3.scope - libcontainer container cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3.
Jan 17 00:26:51.699557 kubelet[2562]: E0117 00:26:51.699530 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:26:51.699649 kubelet[2562]: W0117 00:26:51.699558 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:26:51.699649 kubelet[2562]: E0117 00:26:51.699590 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three FlexVolume probe messages repeat verbatim through Jan 17 00:26:51.732048]
Jan 17 00:26:51.733199 containerd[1501]: time="2026-01-17T00:26:51.733163703Z" level=info msg="StartContainer for \"cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3\" returns successfully"
Jan 17 00:26:51.757211 systemd[1]: cri-containerd-cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3.scope: Deactivated successfully.
Jan 17 00:26:51.790194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3-rootfs.mount: Deactivated successfully.
Jan 17 00:26:51.887785 containerd[1501]: time="2026-01-17T00:26:51.887471560Z" level=info msg="shim disconnected" id=cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3 namespace=k8s.io
Jan 17 00:26:51.887785 containerd[1501]: time="2026-01-17T00:26:51.887574093Z" level=warning msg="cleaning up after shim disconnected" id=cae33d7deb6be2c1d514105e61d52ed117f789419cca3c85b62d510b689980c3 namespace=k8s.io
Jan 17 00:26:51.887785 containerd[1501]: time="2026-01-17T00:26:51.887589474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:26:51.921517 containerd[1501]: time="2026-01-17T00:26:51.921369928Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:26:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:26:52.515398 kubelet[2562]: E0117 00:26:52.514840 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:26:52.677342 containerd[1501]: time="2026-01-17T00:26:52.676669231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 17 00:26:54.515465 kubelet[2562]: E0117 00:26:54.515333 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:26:56.515690 kubelet[2562]: E0117 00:26:56.515638 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:26:56.943110 containerd[1501]: time="2026-01-17T00:26:56.942855819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:56.943979 containerd[1501]: time="2026-01-17T00:26:56.943944479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 17 00:26:56.944794 containerd[1501]: time="2026-01-17T00:26:56.944759396Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:56.946408 containerd[1501]: time="2026-01-17T00:26:56.946391622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:26:56.947178 containerd[1501]: time="2026-01-17T00:26:56.946835922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.270123149s"
Jan 17 00:26:56.947178 containerd[1501]: time="2026-01-17T00:26:56.946859713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 17 00:26:56.949831 containerd[1501]: time="2026-01-17T00:26:56.949748337Z" level=info msg="CreateContainer within sandbox \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 00:26:56.966430 containerd[1501]: time="2026-01-17T00:26:56.966375804Z" level=info msg="CreateContainer within sandbox \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8\""
Jan 17 00:26:56.967338 containerd[1501]: time="2026-01-17T00:26:56.967312978Z" level=info msg="StartContainer for \"712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8\""
Jan 17 00:26:57.000264 systemd[1]: Started cri-containerd-712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8.scope - libcontainer container 712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8.
Jan 17 00:26:57.034765 containerd[1501]: time="2026-01-17T00:26:57.034175153Z" level=info msg="StartContainer for \"712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8\" returns successfully"
Jan 17 00:26:57.630466 containerd[1501]: time="2026-01-17T00:26:57.630360453Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:26:57.637204 systemd[1]: cri-containerd-712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8.scope: Deactivated successfully.
Jan 17 00:26:57.681690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8-rootfs.mount: Deactivated successfully.
Jan 17 00:26:57.745464 kubelet[2562]: I0117 00:26:57.744479 2562 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 17 00:26:57.775146 containerd[1501]: time="2026-01-17T00:26:57.775019838Z" level=info msg="shim disconnected" id=712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8 namespace=k8s.io
Jan 17 00:26:57.775146 containerd[1501]: time="2026-01-17T00:26:57.775114042Z" level=warning msg="cleaning up after shim disconnected" id=712dcbd5f822a5ca176300aa34f5f45f50c6eaf0b80fcf7ea3cafd34a113c9b8 namespace=k8s.io
Jan 17 00:26:57.775146 containerd[1501]: time="2026-01-17T00:26:57.775130033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:26:57.830962 systemd[1]: Created slice kubepods-besteffort-pod481372c3_ef6e_46bf_86cb_78fea87a79f9.slice - libcontainer container kubepods-besteffort-pod481372c3_ef6e_46bf_86cb_78fea87a79f9.slice.
Jan 17 00:26:57.844576 systemd[1]: Created slice kubepods-burstable-poddd4b2cd2_cd64_4cf4_9264_84814b92189d.slice - libcontainer container kubepods-burstable-poddd4b2cd2_cd64_4cf4_9264_84814b92189d.slice.
Jan 17 00:26:57.857361 systemd[1]: Created slice kubepods-burstable-podc92915ad_48c4_496c_93e3_f83efa51b583.slice - libcontainer container kubepods-burstable-podc92915ad_48c4_496c_93e3_f83efa51b583.slice.
Jan 17 00:26:57.866973 kubelet[2562]: I0117 00:26:57.866931 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtw6\" (UniqueName: \"kubernetes.io/projected/dd4b2cd2-cd64-4cf4-9264-84814b92189d-kube-api-access-phtw6\") pod \"coredns-668d6bf9bc-djgpg\" (UID: \"dd4b2cd2-cd64-4cf4-9264-84814b92189d\") " pod="kube-system/coredns-668d6bf9bc-djgpg"
Jan 17 00:26:57.866973 kubelet[2562]: I0117 00:26:57.866961 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a05bf132-cecb-477a-a941-f502759ced80-goldmane-ca-bundle\") pod \"goldmane-666569f655-ssv4k\" (UID: \"a05bf132-cecb-477a-a941-f502759ced80\") " pod="calico-system/goldmane-666569f655-ssv4k"
Jan 17 00:26:57.866973 kubelet[2562]: I0117 00:26:57.866977 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5jx\" (UniqueName: \"kubernetes.io/projected/7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee-kube-api-access-kz5jx\") pod \"calico-apiserver-6675cb976f-qgmnw\" (UID: \"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee\") " pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw"
Jan 17 00:26:57.867306 kubelet[2562]: I0117 00:26:57.866992 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee-calico-apiserver-certs\") pod \"calico-apiserver-6675cb976f-qgmnw\" (UID: \"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee\") " pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw"
Jan 17 00:26:57.867306 kubelet[2562]: I0117 00:26:57.867008 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nkr5\" (UniqueName: \"kubernetes.io/projected/481372c3-ef6e-46bf-86cb-78fea87a79f9-kube-api-access-2nkr5\") pod \"calico-kube-controllers-64b8756fdc-7qc6h\" (UID: \"481372c3-ef6e-46bf-86cb-78fea87a79f9\") " pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h"
Jan 17 00:26:57.867306 kubelet[2562]: I0117 00:26:57.867021 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd4b2cd2-cd64-4cf4-9264-84814b92189d-config-volume\") pod \"coredns-668d6bf9bc-djgpg\" (UID: \"dd4b2cd2-cd64-4cf4-9264-84814b92189d\") " pod="kube-system/coredns-668d6bf9bc-djgpg"
Jan 17 00:26:57.867306 kubelet[2562]: I0117 00:26:57.867034 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn44h\" (UniqueName: \"kubernetes.io/projected/a05bf132-cecb-477a-a941-f502759ced80-kube-api-access-bn44h\") pod \"goldmane-666569f655-ssv4k\" (UID: \"a05bf132-cecb-477a-a941-f502759ced80\") " pod="calico-system/goldmane-666569f655-ssv4k"
Jan 17 00:26:57.867306 kubelet[2562]: I0117 00:26:57.867049 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73c589a5-8e71-425c-a060-0cf6cb3ed239-calico-apiserver-certs\") pod \"calico-apiserver-6675cb976f-s7mzt\" (UID: \"73c589a5-8e71-425c-a060-0cf6cb3ed239\") " pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt"
Jan 17 00:26:57.868737 kubelet[2562]: I0117 00:26:57.867066 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481372c3-ef6e-46bf-86cb-78fea87a79f9-tigera-ca-bundle\") pod \"calico-kube-controllers-64b8756fdc-7qc6h\" (UID: \"481372c3-ef6e-46bf-86cb-78fea87a79f9\") " pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h"
Jan 17 00:26:57.868737 kubelet[2562]: I0117 00:26:57.867082 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a05bf132-cecb-477a-a941-f502759ced80-goldmane-key-pair\") pod \"goldmane-666569f655-ssv4k\" (UID: \"a05bf132-cecb-477a-a941-f502759ced80\") " pod="calico-system/goldmane-666569f655-ssv4k"
Jan 17 00:26:57.868737 kubelet[2562]: I0117 00:26:57.867110 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c92915ad-48c4-496c-93e3-f83efa51b583-config-volume\") pod \"coredns-668d6bf9bc-mgc4q\" (UID: \"c92915ad-48c4-496c-93e3-f83efa51b583\") " pod="kube-system/coredns-668d6bf9bc-mgc4q"
Jan 17 00:26:57.868737 kubelet[2562]: I0117 00:26:57.867123 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ghw\" (UniqueName: \"kubernetes.io/projected/c92915ad-48c4-496c-93e3-f83efa51b583-kube-api-access-j6ghw\") pod \"coredns-668d6bf9bc-mgc4q\" (UID: \"c92915ad-48c4-496c-93e3-f83efa51b583\") " pod="kube-system/coredns-668d6bf9bc-mgc4q"
Jan 17 00:26:57.868737 kubelet[2562]: I0117 00:26:57.867155 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnczc\" (UniqueName: \"kubernetes.io/projected/73c589a5-8e71-425c-a060-0cf6cb3ed239-kube-api-access-fnczc\") pod \"calico-apiserver-6675cb976f-s7mzt\" (UID: \"73c589a5-8e71-425c-a060-0cf6cb3ed239\") " pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt"
Jan 17 00:26:57.869072 kubelet[2562]: I0117 00:26:57.867168 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05bf132-cecb-477a-a941-f502759ced80-config\") pod \"goldmane-666569f655-ssv4k\" (UID: \"a05bf132-cecb-477a-a941-f502759ced80\") " pod="calico-system/goldmane-666569f655-ssv4k"
Jan 17 00:26:57.873519 systemd[1]: Created slice kubepods-besteffort-poda05bf132_cecb_477a_a941_f502759ced80.slice - libcontainer container kubepods-besteffort-poda05bf132_cecb_477a_a941_f502759ced80.slice.
Jan 17 00:26:57.880961 systemd[1]: Created slice kubepods-besteffort-pod73c589a5_8e71_425c_a060_0cf6cb3ed239.slice - libcontainer container kubepods-besteffort-pod73c589a5_8e71_425c_a060_0cf6cb3ed239.slice.
Jan 17 00:26:57.891987 systemd[1]: Created slice kubepods-besteffort-pod7ea2b3c0_00a9_42a4_a1ac_e5bd2a459fee.slice - libcontainer container kubepods-besteffort-pod7ea2b3c0_00a9_42a4_a1ac_e5bd2a459fee.slice.
Jan 17 00:26:57.899900 systemd[1]: Created slice kubepods-besteffort-pod04e154f3_423d_435c_8e3a_9f74e4b2d1d0.slice - libcontainer container kubepods-besteffort-pod04e154f3_423d_435c_8e3a_9f74e4b2d1d0.slice.
Jan 17 00:26:57.970138 kubelet[2562]: I0117 00:26:57.968465 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5c9f\" (UniqueName: \"kubernetes.io/projected/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-kube-api-access-f5c9f\") pod \"whisker-ddfc459bc-95rsc\" (UID: \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\") " pod="calico-system/whisker-ddfc459bc-95rsc"
Jan 17 00:26:57.970138 kubelet[2562]: I0117 00:26:57.968553 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-backend-key-pair\") pod \"whisker-ddfc459bc-95rsc\" (UID: \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\") " pod="calico-system/whisker-ddfc459bc-95rsc"
Jan 17 00:26:57.970138 kubelet[2562]: I0117 00:26:57.968598 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-ca-bundle\") pod \"whisker-ddfc459bc-95rsc\" (UID: \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\") " pod="calico-system/whisker-ddfc459bc-95rsc"
Jan 17 00:26:58.139560 containerd[1501]: time="2026-01-17T00:26:58.139358400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b8756fdc-7qc6h,Uid:481372c3-ef6e-46bf-86cb-78fea87a79f9,Namespace:calico-system,Attempt:0,}"
Jan 17 00:26:58.151233 containerd[1501]: time="2026-01-17T00:26:58.150826751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-djgpg,Uid:dd4b2cd2-cd64-4cf4-9264-84814b92189d,Namespace:kube-system,Attempt:0,}"
Jan 17 00:26:58.167258 containerd[1501]: time="2026-01-17T00:26:58.167169814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgc4q,Uid:c92915ad-48c4-496c-93e3-f83efa51b583,Namespace:kube-system,Attempt:0,}"
Jan 17 00:26:58.183701 containerd[1501]: time="2026-01-17T00:26:58.183035567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ssv4k,Uid:a05bf132-cecb-477a-a941-f502759ced80,Namespace:calico-system,Attempt:0,}"
Jan 17 00:26:58.187653 containerd[1501]: time="2026-01-17T00:26:58.187590627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-s7mzt,Uid:73c589a5-8e71-425c-a060-0cf6cb3ed239,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:26:58.197085 containerd[1501]: time="2026-01-17T00:26:58.197017828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-qgmnw,Uid:7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:26:58.204380 containerd[1501]: time="2026-01-17T00:26:58.204309937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ddfc459bc-95rsc,Uid:04e154f3-423d-435c-8e3a-9f74e4b2d1d0,Namespace:calico-system,Attempt:0,}"
Jan 17 00:26:58.291516 containerd[1501]: time="2026-01-17T00:26:58.291456994Z" level=error msg="Failed to destroy network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.293582 containerd[1501]: time="2026-01-17T00:26:58.293433920Z" level=error msg="encountered an error cleaning up failed sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.293672 containerd[1501]: time="2026-01-17T00:26:58.293607918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b8756fdc-7qc6h,Uid:481372c3-ef6e-46bf-86cb-78fea87a79f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.294223 kubelet[2562]: E0117 00:26:58.294183 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.294337 kubelet[2562]: E0117 00:26:58.294256 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h"
Jan 17 00:26:58.294337 kubelet[2562]: E0117 00:26:58.294275 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h"
Jan 17 00:26:58.294572 kubelet[2562]: E0117 00:26:58.294538 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64b8756fdc-7qc6h_calico-system(481372c3-ef6e-46bf-86cb-78fea87a79f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64b8756fdc-7qc6h_calico-system(481372c3-ef6e-46bf-86cb-78fea87a79f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9"
Jan 17 00:26:58.355996 containerd[1501]: time="2026-01-17T00:26:58.355934851Z" level=error msg="Failed to destroy network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.356493 containerd[1501]: time="2026-01-17T00:26:58.356416801Z" level=error msg="encountered an error cleaning up failed sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.356493 containerd[1501]: time="2026-01-17T00:26:58.356469684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgc4q,Uid:c92915ad-48c4-496c-93e3-f83efa51b583,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.357226 kubelet[2562]: E0117 00:26:58.356693 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.357226 kubelet[2562]: E0117 00:26:58.356751 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mgc4q"
Jan 17 00:26:58.357226 kubelet[2562]: E0117 00:26:58.356770 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mgc4q"
Jan 17 00:26:58.357430 kubelet[2562]: E0117 00:26:58.356806 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mgc4q_kube-system(c92915ad-48c4-496c-93e3-f83efa51b583)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mgc4q_kube-system(c92915ad-48c4-496c-93e3-f83efa51b583)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mgc4q" podUID="c92915ad-48c4-496c-93e3-f83efa51b583"
Jan 17 00:26:58.406606 containerd[1501]: time="2026-01-17T00:26:58.406485119Z" level=error msg="Failed to destroy network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.408393 containerd[1501]: time="2026-01-17T00:26:58.407923262Z" level=error msg="encountered an error cleaning up failed sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.408393 containerd[1501]: time="2026-01-17T00:26:58.408238926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-djgpg,Uid:dd4b2cd2-cd64-4cf4-9264-84814b92189d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.408573 kubelet[2562]: E0117 00:26:58.408510 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.408573 kubelet[2562]: E0117 00:26:58.408566 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-djgpg"
Jan 17 00:26:58.408656 kubelet[2562]: E0117 00:26:58.408586 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-djgpg"
Jan 17 00:26:58.408656 kubelet[2562]: E0117 00:26:58.408640 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-djgpg_kube-system(dd4b2cd2-cd64-4cf4-9264-84814b92189d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-djgpg_kube-system(dd4b2cd2-cd64-4cf4-9264-84814b92189d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-djgpg" podUID="dd4b2cd2-cd64-4cf4-9264-84814b92189d"
Jan 17 00:26:58.414612 containerd[1501]: time="2026-01-17T00:26:58.414464627Z" level=error msg="Failed to destroy network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.415182 containerd[1501]: time="2026-01-17T00:26:58.414983031Z" level=error msg="encountered an error cleaning up failed sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.415182 containerd[1501]: time="2026-01-17T00:26:58.415028952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ddfc459bc-95rsc,Uid:04e154f3-423d-435c-8e3a-9f74e4b2d1d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.415674 containerd[1501]: time="2026-01-17T00:26:58.415651509Z" level=error msg="Failed to destroy network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.416154 kubelet[2562]: E0117 00:26:58.416084 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.416245 kubelet[2562]: E0117 00:26:58.416173 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ddfc459bc-95rsc"
Jan 17 00:26:58.416280 kubelet[2562]: E0117 00:26:58.416245 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ddfc459bc-95rsc"
Jan 17 00:26:58.416776 kubelet[2562]: E0117 00:26:58.416711 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-ddfc459bc-95rsc_calico-system(04e154f3-423d-435c-8e3a-9f74e4b2d1d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-ddfc459bc-95rsc_calico-system(04e154f3-423d-435c-8e3a-9f74e4b2d1d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ddfc459bc-95rsc" podUID="04e154f3-423d-435c-8e3a-9f74e4b2d1d0"
Jan 17 00:26:58.417235 containerd[1501]: time="2026-01-17T00:26:58.417213228Z" level=error msg="encountered an error cleaning up failed sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.417331 containerd[1501]: time="2026-01-17T00:26:58.417314552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-qgmnw,Uid:7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.417595 kubelet[2562]: E0117 00:26:58.417577 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.417688 kubelet[2562]: E0117 00:26:58.417677 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw"
Jan 17 00:26:58.417756 kubelet[2562]: E0117 00:26:58.417745 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw"
Jan 17 00:26:58.417867 kubelet[2562]: E0117 00:26:58.417828 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6675cb976f-qgmnw_calico-apiserver(7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6675cb976f-qgmnw_calico-apiserver(7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee"
Jan 17 00:26:58.430282 containerd[1501]: time="2026-01-17T00:26:58.430223556Z" level=error msg="Failed to destroy network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.431547 containerd[1501]: time="2026-01-17T00:26:58.431360315Z" level=error msg="encountered an error cleaning up failed sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.431547 containerd[1501]: time="2026-01-17T00:26:58.431438268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ssv4k,Uid:a05bf132-cecb-477a-a941-f502759ced80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.431714 kubelet[2562]: E0117 00:26:58.431660 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:26:58.431757 kubelet[2562]: E0117 00:26:58.431717 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ssv4k"
Jan 17 00:26:58.431757 kubelet[2562]: E0117 00:26:58.431736 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ssv4k"
Jan 17 00:26:58.431810 kubelet[2562]: E0117 00:26:58.431776 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ssv4k_calico-system(a05bf132-cecb-477a-a941-f502759ced80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ssv4k_calico-system(a05bf132-cecb-477a-a941-f502759ced80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ssv4k"
podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:26:58.438425 containerd[1501]: time="2026-01-17T00:26:58.438382082Z" level=error msg="Failed to destroy network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.438765 containerd[1501]: time="2026-01-17T00:26:58.438740188Z" level=error msg="encountered an error cleaning up failed sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.438811 containerd[1501]: time="2026-01-17T00:26:58.438790491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-s7mzt,Uid:73c589a5-8e71-425c-a060-0cf6cb3ed239,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.439040 kubelet[2562]: E0117 00:26:58.438997 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.439121 kubelet[2562]: E0117 00:26:58.439057 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" Jan 17 00:26:58.439121 kubelet[2562]: E0117 00:26:58.439078 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" Jan 17 00:26:58.439175 kubelet[2562]: E0117 00:26:58.439137 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6675cb976f-s7mzt_calico-apiserver(73c589a5-8e71-425c-a060-0cf6cb3ed239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6675cb976f-s7mzt_calico-apiserver(73c589a5-8e71-425c-a060-0cf6cb3ed239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:26:58.524695 systemd[1]: Created slice kubepods-besteffort-podeb95b785_13b8_4aa9_b43b_38efbd205ceb.slice - libcontainer container kubepods-besteffort-podeb95b785_13b8_4aa9_b43b_38efbd205ceb.slice. Jan 17 00:26:58.527741 containerd[1501]: time="2026-01-17T00:26:58.527686864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wn7sn,Uid:eb95b785-13b8-4aa9-b43b-38efbd205ceb,Namespace:calico-system,Attempt:0,}" Jan 17 00:26:58.595471 containerd[1501]: time="2026-01-17T00:26:58.595401582Z" level=error msg="Failed to destroy network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.595853 containerd[1501]: time="2026-01-17T00:26:58.595815590Z" level=error msg="encountered an error cleaning up failed sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.595910 containerd[1501]: time="2026-01-17T00:26:58.595890263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wn7sn,Uid:eb95b785-13b8-4aa9-b43b-38efbd205ceb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.596217 kubelet[2562]: E0117 00:26:58.596160 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.596296 kubelet[2562]: E0117 00:26:58.596219 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wn7sn" Jan 17 00:26:58.596296 kubelet[2562]: E0117 00:26:58.596242 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wn7sn" Jan 17 00:26:58.596375 kubelet[2562]: E0117 00:26:58.596285 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:26:58.705713 kubelet[2562]: I0117 00:26:58.705517 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:26:58.707643 containerd[1501]: time="2026-01-17T00:26:58.707538860Z" level=info msg="StopPodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\"" Jan 17 00:26:58.707985 containerd[1501]: time="2026-01-17T00:26:58.707782982Z" level=info msg="Ensure that sandbox 449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6 in task-service has been cleanup successfully" Jan 17 00:26:58.710915 kubelet[2562]: I0117 00:26:58.710879 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:26:58.711580 containerd[1501]: time="2026-01-17T00:26:58.711520144Z" level=info msg="StopPodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\"" Jan 17 00:26:58.711814 containerd[1501]: time="2026-01-17T00:26:58.711745094Z" level=info msg="Ensure that sandbox 5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67 in task-service has been cleanup successfully" Jan 17 00:26:58.713690 kubelet[2562]: I0117 00:26:58.713538 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:26:58.715337 containerd[1501]: time="2026-01-17T00:26:58.714844820Z" level=info msg="StopPodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\"" Jan 17 00:26:58.715337 containerd[1501]: time="2026-01-17T00:26:58.715034408Z" level=info msg="Ensure that sandbox 30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549 in task-service has been cleanup successfully" Jan 17 00:26:58.718173 kubelet[2562]: I0117 00:26:58.717509 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:26:58.720083 containerd[1501]: time="2026-01-17T00:26:58.720010796Z" level=info msg="StopPodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\"" Jan 17 00:26:58.721644 containerd[1501]: time="2026-01-17T00:26:58.721579834Z" level=info msg="Ensure that sandbox 3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8 in task-service has been cleanup successfully" Jan 17 00:26:58.731233 kubelet[2562]: I0117 00:26:58.731156 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:26:58.737801 containerd[1501]: time="2026-01-17T00:26:58.736722566Z" level=info msg="StopPodSandbox for 
\"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\"" Jan 17 00:26:58.738062 containerd[1501]: time="2026-01-17T00:26:58.737897187Z" level=info msg="Ensure that sandbox 5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333 in task-service has been cleanup successfully" Jan 17 00:26:58.752748 containerd[1501]: time="2026-01-17T00:26:58.752709244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:26:58.755070 kubelet[2562]: I0117 00:26:58.754982 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:26:58.761069 containerd[1501]: time="2026-01-17T00:26:58.760789387Z" level=info msg="StopPodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\"" Jan 17 00:26:58.761069 containerd[1501]: time="2026-01-17T00:26:58.760949364Z" level=info msg="Ensure that sandbox 9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a in task-service has been cleanup successfully" Jan 17 00:26:58.771128 kubelet[2562]: I0117 00:26:58.770761 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:26:58.775132 containerd[1501]: time="2026-01-17T00:26:58.774738426Z" level=info msg="StopPodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\"" Jan 17 00:26:58.780059 containerd[1501]: time="2026-01-17T00:26:58.779335367Z" level=info msg="Ensure that sandbox b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4 in task-service has been cleanup successfully" Jan 17 00:26:58.789750 kubelet[2562]: I0117 00:26:58.789716 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:26:58.795286 containerd[1501]: time="2026-01-17T00:26:58.795241622Z" level=info msg="StopPodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\"" Jan 17 00:26:58.798278 containerd[1501]: time="2026-01-17T00:26:58.797685249Z" level=info msg="Ensure that sandbox e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e in task-service has been cleanup successfully" Jan 17 00:26:58.862293 containerd[1501]: time="2026-01-17T00:26:58.862231728Z" level=error msg="StopPodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" failed" error="failed to destroy network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.864345 kubelet[2562]: E0117 00:26:58.862803 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:26:58.864345 kubelet[2562]: E0117 00:26:58.862905 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6"} Jan 17 00:26:58.864345 kubelet[2562]: E0117 00:26:58.863009 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.864345 kubelet[2562]: E0117 00:26:58.863045 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:26:58.874698 containerd[1501]: time="2026-01-17T00:26:58.874626360Z" level=error msg="StopPodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" failed" error="failed to destroy network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.875054 kubelet[2562]: E0117 00:26:58.875012 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:26:58.875255 kubelet[2562]: E0117 00:26:58.875233 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67"} Jan 17 00:26:58.875404 kubelet[2562]: E0117 00:26:58.875336 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73c589a5-8e71-425c-a060-0cf6cb3ed239\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.875404 kubelet[2562]: E0117 00:26:58.875375 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73c589a5-8e71-425c-a060-0cf6cb3ed239\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:26:58.877660 containerd[1501]: time="2026-01-17T00:26:58.877616230Z" level=error msg="StopPodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" failed" error="failed to destroy network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.877935 kubelet[2562]: E0117 00:26:58.877902 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:26:58.877990 kubelet[2562]: E0117 00:26:58.877939 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8"} Jan 17 00:26:58.877990 kubelet[2562]: E0117 00:26:58.877967 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c92915ad-48c4-496c-93e3-f83efa51b583\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.878226 kubelet[2562]: E0117 00:26:58.877986 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c92915ad-48c4-496c-93e3-f83efa51b583\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mgc4q" podUID="c92915ad-48c4-496c-93e3-f83efa51b583" Jan 17 00:26:58.884803 containerd[1501]: time="2026-01-17T00:26:58.884726501Z" level=error msg="StopPodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" failed" error="failed to destroy network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.885374 kubelet[2562]: E0117 00:26:58.885188 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:26:58.885374 kubelet[2562]: E0117 00:26:58.885241 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549"} Jan 17 00:26:58.885374 kubelet[2562]: E0117 00:26:58.885270 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.885374 kubelet[2562]: E0117 00:26:58.885296 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb95b785-13b8-4aa9-b43b-38efbd205ceb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:26:58.893615 containerd[1501]: time="2026-01-17T00:26:58.893514535Z" level=error msg="StopPodSandbox for \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" failed" error="failed to destroy network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.894144 kubelet[2562]: E0117 00:26:58.893994 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:26:58.894144 kubelet[2562]: E0117 00:26:58.894045 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333"} Jan 17 00:26:58.894144 kubelet[2562]: E0117 00:26:58.894073 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"481372c3-ef6e-46bf-86cb-78fea87a79f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.894463 kubelet[2562]: E0117 00:26:58.894379 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"481372c3-ef6e-46bf-86cb-78fea87a79f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:26:58.904747 containerd[1501]: time="2026-01-17T00:26:58.904690943Z" level=error msg="StopPodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" failed" error="failed to destroy network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.905181 kubelet[2562]: E0117 00:26:58.904951 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:26:58.905181 kubelet[2562]: E0117 00:26:58.905009 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a"} Jan 17 00:26:58.905181 kubelet[2562]: E0117 00:26:58.905043 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a05bf132-cecb-477a-a941-f502759ced80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.905181 kubelet[2562]: E0117 00:26:58.905063 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a05bf132-cecb-477a-a941-f502759ced80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:26:58.914723 containerd[1501]: time="2026-01-17T00:26:58.914537624Z" level=error msg="StopPodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" failed" error="failed to destroy network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.916273 kubelet[2562]: E0117 00:26:58.914803 2562 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:26:58.916273 kubelet[2562]: E0117 00:26:58.914867 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e"} Jan 17 00:26:58.916273 kubelet[2562]: E0117 00:26:58.914908 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd4b2cd2-cd64-4cf4-9264-84814b92189d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.916273 kubelet[2562]: E0117 00:26:58.914936 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd4b2cd2-cd64-4cf4-9264-84814b92189d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-djgpg" podUID="dd4b2cd2-cd64-4cf4-9264-84814b92189d" Jan 17 00:26:58.916739 containerd[1501]: time="2026-01-17T00:26:58.916687288Z" level=error msg="StopPodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" failed" error="failed to destroy network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:26:58.916937 kubelet[2562]: E0117 00:26:58.916889 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:26:58.917004 kubelet[2562]: E0117 00:26:58.916946 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4"} Jan 17 00:26:58.917004 kubelet[2562]: E0117 00:26:58.916982 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:26:58.917082 kubelet[2562]: E0117 00:26:58.917008 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ddfc459bc-95rsc" podUID="04e154f3-423d-435c-8e3a-9f74e4b2d1d0" Jan 17 00:27:01.814495 kubelet[2562]: I0117 00:27:01.813878 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:27:06.399439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2163595464.mount: Deactivated successfully. Jan 17 00:27:06.438980 containerd[1501]: time="2026-01-17T00:27:06.438193227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:06.438980 containerd[1501]: time="2026-01-17T00:27:06.438940673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:27:06.439694 containerd[1501]: time="2026-01-17T00:27:06.439672459Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:06.441404 containerd[1501]: time="2026-01-17T00:27:06.441377309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:06.441930 containerd[1501]: time="2026-01-17T00:27:06.441899588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.689009466s" Jan 17 00:27:06.441983 containerd[1501]: time="2026-01-17T00:27:06.441936129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:27:06.469544 containerd[1501]: time="2026-01-17T00:27:06.469494519Z" level=info msg="CreateContainer within sandbox \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:27:06.506699 containerd[1501]: time="2026-01-17T00:27:06.506623507Z" level=info msg="CreateContainer within sandbox \"e335f2f9ecfefe993cc1249a9fcc9f48f9f5989872cd244d582946404faded38\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688\"" Jan 17 00:27:06.508646 containerd[1501]: time="2026-01-17T00:27:06.507252709Z" level=info msg="StartContainer for \"b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688\"" Jan 17 00:27:06.549500 systemd[1]: Started 
cri-containerd-b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688.scope - libcontainer container b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688. Jan 17 00:27:06.592700 containerd[1501]: time="2026-01-17T00:27:06.592530612Z" level=info msg="StartContainer for \"b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688\" returns successfully" Jan 17 00:27:06.714308 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:27:06.714488 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:27:06.820410 containerd[1501]: time="2026-01-17T00:27:06.820023235Z" level=info msg="StopPodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\"" Jan 17 00:27:06.906317 kubelet[2562]: I0117 00:27:06.905519 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gqwgp" podStartSLOduration=1.367417887 podStartE2EDuration="20.905480385s" podCreationTimestamp="2026-01-17 00:26:46 +0000 UTC" firstStartedPulling="2026-01-17 00:26:46.904679649 +0000 UTC m=+21.539517034" lastFinishedPulling="2026-01-17 00:27:06.442742147 +0000 UTC m=+41.077579532" observedRunningTime="2026-01-17 00:27:06.90533098 +0000 UTC m=+41.540168365" watchObservedRunningTime="2026-01-17 00:27:06.905480385 +0000 UTC m=+41.540317770" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.947 [INFO][3810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.948 [INFO][3810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" iface="eth0" netns="/var/run/netns/cni-b0e21baf-d2ca-d164-bf08-ec07c7524514" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.949 [INFO][3810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" iface="eth0" netns="/var/run/netns/cni-b0e21baf-d2ca-d164-bf08-ec07c7524514" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.950 [INFO][3810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" iface="eth0" netns="/var/run/netns/cni-b0e21baf-d2ca-d164-bf08-ec07c7524514" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.950 [INFO][3810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.950 [INFO][3810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.996 [INFO][3844] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.996 [INFO][3844] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
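Every failure in the preceding minute of log traffic has a single root cause, spelled out in the error text itself: on both the add and delete paths the Calico CNI plugin first stats /var/lib/calico/nodename, a file that the calico/node container writes only once it is running with /var/lib/calico mounted from the host. The loop clears as soon as the calico-node image pull finishes and the container starts (WireGuard, which Calico can use for encrypted pod traffic, loads right after), after which the StopPodSandbox for the whisker sandbox finally succeeds above. A minimal Go sketch of that gate, reconstructed from the error wording alone rather than from the actual Calico source:

```go
// Illustrative reconstruction (not Calico's real code) of the gate visible
// in the errors above: every CNI add/delete stats /var/lib/calico/nodename,
// which calico/node creates on startup, so all sandbox operations on the
// node fail until that container is running.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the message repeated above for both (add) and (delete).
		return "", fmt.Errorf("stat %s: no such file or directory: check that "+
			"the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```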
Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:06.996 [INFO][3844] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:07.006 [WARNING][3844] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:07.006 [INFO][3844] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:07.009 [INFO][3844] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:07.019936 containerd[1501]: 2026-01-17 00:27:07.016 [INFO][3810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:07.020404 containerd[1501]: time="2026-01-17T00:27:07.020181466Z" level=info msg="TearDown network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" successfully" Jan 17 00:27:07.020404 containerd[1501]: time="2026-01-17T00:27:07.020221537Z" level=info msg="StopPodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" returns successfully" Jan 17 00:27:07.138338 kubelet[2562]: I0117 00:27:07.137710 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-ca-bundle\") pod \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\" (UID: \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\") " Jan 17 00:27:07.138338 kubelet[2562]: I0117 00:27:07.137782 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-backend-key-pair\") pod \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\" (UID: \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\") " Jan 17 00:27:07.138338 kubelet[2562]: I0117 00:27:07.137820 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5c9f\" (UniqueName: \"kubernetes.io/projected/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-kube-api-access-f5c9f\") pod \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\" (UID: \"04e154f3-423d-435c-8e3a-9f74e4b2d1d0\") " Jan 17 00:27:07.139061 kubelet[2562]: I0117 00:27:07.139027 2562 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "04e154f3-423d-435c-8e3a-9f74e4b2d1d0" (UID: "04e154f3-423d-435c-8e3a-9f74e4b2d1d0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:27:07.143869 kubelet[2562]: I0117 00:27:07.143828 2562 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-kube-api-access-f5c9f" (OuterVolumeSpecName: "kube-api-access-f5c9f") pod "04e154f3-423d-435c-8e3a-9f74e4b2d1d0" (UID: "04e154f3-423d-435c-8e3a-9f74e4b2d1d0"). InnerVolumeSpecName "kube-api-access-f5c9f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:27:07.145345 kubelet[2562]: I0117 00:27:07.145301 2562 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "04e154f3-423d-435c-8e3a-9f74e4b2d1d0" (UID: "04e154f3-423d-435c-8e3a-9f74e4b2d1d0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:27:07.239211 kubelet[2562]: I0117 00:27:07.239005 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5c9f\" (UniqueName: \"kubernetes.io/projected/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-kube-api-access-f5c9f\") on node \"ci-4081-3-6-n-e100e79615\" DevicePath \"\"" Jan 17 00:27:07.239211 kubelet[2562]: I0117 00:27:07.239074 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-ca-bundle\") on node \"ci-4081-3-6-n-e100e79615\" DevicePath \"\"" Jan 17 00:27:07.239211 kubelet[2562]: I0117 00:27:07.239159 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04e154f3-423d-435c-8e3a-9f74e4b2d1d0-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-e100e79615\" DevicePath \"\"" Jan 17 00:27:07.400555 systemd[1]: run-netns-cni\x2db0e21baf\x2dd2ca\x2dd164\x2dbf08\x2dec07c7524514.mount: Deactivated successfully. Jan 17 00:27:07.400682 systemd[1]: var-lib-kubelet-pods-04e154f3\x2d423d\x2d435c\x2d8e3a\x2d9f74e4b2d1d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5c9f.mount: Deactivated successfully. Jan 17 00:27:07.400785 systemd[1]: var-lib-kubelet-pods-04e154f3\x2d423d\x2d435c\x2d8e3a\x2d9f74e4b2d1d0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:27:07.524039 systemd[1]: Removed slice kubepods-besteffort-pod04e154f3_423d_435c_8e3a_9f74e4b2d1d0.slice - libcontainer container kubepods-besteffort-pod04e154f3_423d_435c_8e3a_9f74e4b2d1d0.slice. Jan 17 00:27:07.973343 systemd[1]: Created slice kubepods-besteffort-pod95fdc174_63e2_499b_8b79_a226c39e6eaf.slice - libcontainer container kubepods-besteffort-pod95fdc174_63e2_499b_8b79_a226c39e6eaf.slice. 
Jan 17 00:27:08.049285 kubelet[2562]: I0117 00:27:08.049226 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95fdc174-63e2-499b-8b79-a226c39e6eaf-whisker-backend-key-pair\") pod \"whisker-7f57f5f859-j9rxk\" (UID: \"95fdc174-63e2-499b-8b79-a226c39e6eaf\") " pod="calico-system/whisker-7f57f5f859-j9rxk" Jan 17 00:27:08.050013 kubelet[2562]: I0117 00:27:08.049905 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fdc174-63e2-499b-8b79-a226c39e6eaf-whisker-ca-bundle\") pod \"whisker-7f57f5f859-j9rxk\" (UID: \"95fdc174-63e2-499b-8b79-a226c39e6eaf\") " pod="calico-system/whisker-7f57f5f859-j9rxk" Jan 17 00:27:08.050013 kubelet[2562]: I0117 00:27:08.049948 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scctk\" (UniqueName: \"kubernetes.io/projected/95fdc174-63e2-499b-8b79-a226c39e6eaf-kube-api-access-scctk\") pod \"whisker-7f57f5f859-j9rxk\" (UID: \"95fdc174-63e2-499b-8b79-a226c39e6eaf\") " pod="calico-system/whisker-7f57f5f859-j9rxk" Jan 17 00:27:08.278198 containerd[1501]: time="2026-01-17T00:27:08.277880967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f57f5f859-j9rxk,Uid:95fdc174-63e2-499b-8b79-a226c39e6eaf,Namespace:calico-system,Attempt:0,}" Jan 17 00:27:08.521372 systemd-networkd[1401]: calidb4c7429973: Link UP Jan 17 00:27:08.523450 systemd-networkd[1401]: calidb4c7429973: Gained carrier Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.362 [INFO][3974] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.380 [INFO][3974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0 whisker-7f57f5f859- calico-system 95fdc174-63e2-499b-8b79-a226c39e6eaf 921 0 2026-01-17 00:27:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f57f5f859 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 whisker-7f57f5f859-j9rxk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb4c7429973 [] [] }} ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.380 [INFO][3974] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.449 [INFO][3986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" HandleID="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.450 [INFO][3986] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" HandleID="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-e100e79615", "pod":"whisker-7f57f5f859-j9rxk", "timestamp":"2026-01-17 00:27:08.449957598 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.450 [INFO][3986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.450 [INFO][3986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.450 [INFO][3986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.462 [INFO][3986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.468 [INFO][3986] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.474 [INFO][3986] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.475 [INFO][3986] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.478 [INFO][3986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.478 [INFO][3986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.479 [INFO][3986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.484 [INFO][3986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.494 [INFO][3986] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.129/26] block=192.168.114.128/26 handle="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.495 [INFO][3986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.129/26] handle="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:08.545726 
containerd[1501]: 2026-01-17 00:27:08.495 [INFO][3986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:08.545726 containerd[1501]: 2026-01-17 00:27:08.496 [INFO][3986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.129/26] IPv6=[] ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" HandleID="k8s-pod-network.9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.548459 containerd[1501]: 2026-01-17 00:27:08.502 [INFO][3974] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0", GenerateName:"whisker-7f57f5f859-", Namespace:"calico-system", SelfLink:"", UID:"95fdc174-63e2-499b-8b79-a226c39e6eaf", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f57f5f859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"whisker-7f57f5f859-j9rxk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb4c7429973", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:08.548459 containerd[1501]: 2026-01-17 00:27:08.503 [INFO][3974] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.129/32] ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.548459 containerd[1501]: 2026-01-17 00:27:08.503 [INFO][3974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb4c7429973 ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.548459 containerd[1501]: 2026-01-17 00:27:08.521 [INFO][3974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.548459 containerd[1501]: 2026-01-17 00:27:08.526 [INFO][3974] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0", GenerateName:"whisker-7f57f5f859-", Namespace:"calico-system", SelfLink:"", UID:"95fdc174-63e2-499b-8b79-a226c39e6eaf", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f57f5f859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd", Pod:"whisker-7f57f5f859-j9rxk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb4c7429973", MAC:"9e:ac:ae:85:41:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:08.548459 containerd[1501]: 2026-01-17 00:27:08.541 [INFO][3974] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd" Namespace="calico-system" Pod="whisker-7f57f5f859-j9rxk" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--7f57f5f859--j9rxk-eth0" Jan 17 00:27:08.603130 containerd[1501]: time="2026-01-17T00:27:08.584498594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:08.603130 containerd[1501]: time="2026-01-17T00:27:08.584552836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:08.603130 containerd[1501]: time="2026-01-17T00:27:08.584561706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:08.603130 containerd[1501]: time="2026-01-17T00:27:08.584700841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:08.649061 systemd[1]: Started cri-containerd-9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd.scope - libcontainer container 9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd. 
Jan 17 00:27:08.758252 containerd[1501]: time="2026-01-17T00:27:08.758060104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f57f5f859-j9rxk,Uid:95fdc174-63e2-499b-8b79-a226c39e6eaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b7267ff25b42e557c46ae1f2cd9b692b8bed3ed423abffa473ac8b8a88e8cfd\"" Jan 17 00:27:08.761031 containerd[1501]: time="2026-01-17T00:27:08.760856198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:27:08.795303 kernel: bpftool[4075]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:27:09.057067 systemd-networkd[1401]: vxlan.calico: Link UP Jan 17 00:27:09.057075 systemd-networkd[1401]: vxlan.calico: Gained carrier Jan 17 00:27:09.191729 containerd[1501]: time="2026-01-17T00:27:09.191458925Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:09.193127 containerd[1501]: time="2026-01-17T00:27:09.192927212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:27:09.193127 containerd[1501]: time="2026-01-17T00:27:09.193041746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:27:09.193350 kubelet[2562]: E0117 00:27:09.193269 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:09.193350 kubelet[2562]: E0117 00:27:09.193342 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:09.196767 kubelet[2562]: E0117 00:27:09.196701 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4cfe1eccd3c14c02813c65fb803c84dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:09.199898 containerd[1501]: time="2026-01-17T00:27:09.199524038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:27:09.517752 containerd[1501]: time="2026-01-17T00:27:09.516799578Z" level=info msg="StopPodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\"" Jan 17 00:27:09.517752 containerd[1501]: time="2026-01-17T00:27:09.517389687Z" level=info msg="StopPodSandbox for \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\"" Jan 17 00:27:09.526610 kubelet[2562]: I0117 00:27:09.524377 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e154f3-423d-435c-8e3a-9f74e4b2d1d0" path="/var/lib/kubelet/pods/04e154f3-423d-435c-8e3a-9f74e4b2d1d0/volumes" Jan 17 00:27:09.541849 systemd-networkd[1401]: calidb4c7429973: Gained IPv6LL Jan 17 00:27:09.622345 containerd[1501]: time="2026-01-17T00:27:09.622288943Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:09.625637 containerd[1501]: time="2026-01-17T00:27:09.625580651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:27:09.628127 containerd[1501]: time="2026-01-17T00:27:09.625638552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:09.633235 kubelet[2562]: E0117 00:27:09.633154 2562 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:09.633508 kubelet[2562]: E0117 00:27:09.633470 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:09.634291 kubelet[2562]: E0117 00:27:09.634231 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:09.636384 kubelet[2562]: E0117 00:27:09.636207 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.629 [INFO][4184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.629 [INFO][4184] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" iface="eth0" netns="/var/run/netns/cni-54acbe8f-257f-5bc5-6b53-3545a7ab26a4" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.629 [INFO][4184] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" iface="eth0" netns="/var/run/netns/cni-54acbe8f-257f-5bc5-6b53-3545a7ab26a4" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.629 [INFO][4184] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" iface="eth0" netns="/var/run/netns/cni-54acbe8f-257f-5bc5-6b53-3545a7ab26a4" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.629 [INFO][4184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.629 [INFO][4184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.710 [INFO][4203] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.712 [INFO][4203] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.712 [INFO][4203] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.719 [WARNING][4203] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.719 [INFO][4203] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.720 [INFO][4203] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:09.733264 containerd[1501]: 2026-01-17 00:27:09.725 [INFO][4184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:09.733760 containerd[1501]: time="2026-01-17T00:27:09.733512034Z" level=info msg="TearDown network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" successfully" Jan 17 00:27:09.733760 containerd[1501]: time="2026-01-17T00:27:09.733584566Z" level=info msg="StopPodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" returns successfully" Jan 17 00:27:09.737360 containerd[1501]: time="2026-01-17T00:27:09.737318378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-qgmnw,Uid:7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:27:09.738052 systemd[1]: run-netns-cni\x2d54acbe8f\x2d257f\x2d5bc5\x2d6b53\x2d3545a7ab26a4.mount: Deactivated successfully. Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.626 [INFO][4188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.627 [INFO][4188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" iface="eth0" netns="/var/run/netns/cni-c63265fd-9cdd-375a-1983-35b4b38e65f2" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.628 [INFO][4188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" iface="eth0" netns="/var/run/netns/cni-c63265fd-9cdd-375a-1983-35b4b38e65f2" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.628 [INFO][4188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" iface="eth0" netns="/var/run/netns/cni-c63265fd-9cdd-375a-1983-35b4b38e65f2" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.628 [INFO][4188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.628 [INFO][4188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.717 [INFO][4201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.717 [INFO][4201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.721 [INFO][4201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.738 [WARNING][4201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.739 [INFO][4201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.741 [INFO][4201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:09.752750 containerd[1501]: 2026-01-17 00:27:09.744 [INFO][4188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:09.755215 containerd[1501]: time="2026-01-17T00:27:09.755171999Z" level=info msg="TearDown network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" successfully" Jan 17 00:27:09.755215 containerd[1501]: time="2026-01-17T00:27:09.755214091Z" level=info msg="StopPodSandbox for \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" returns successfully" Jan 17 00:27:09.758511 containerd[1501]: time="2026-01-17T00:27:09.756988638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b8756fdc-7qc6h,Uid:481372c3-ef6e-46bf-86cb-78fea87a79f9,Namespace:calico-system,Attempt:1,}" Jan 17 00:27:09.758289 systemd[1]: run-netns-cni\x2dc63265fd\x2d9cdd\x2d375a\x2d1983\x2d35b4b38e65f2.mount: Deactivated successfully. 
Jan 17 00:27:09.837935 kubelet[2562]: E0117 00:27:09.837629 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:27:09.960903 systemd-networkd[1401]: cali00d40863985: Link UP Jan 17 00:27:09.962416 systemd-networkd[1401]: cali00d40863985: Gained carrier Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.853 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0 calico-apiserver-6675cb976f- calico-apiserver 7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee 938 0 2026-01-17 00:26:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6675cb976f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 calico-apiserver-6675cb976f-qgmnw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00d40863985 [] [] }} ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.853 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.906 [INFO][4239] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" HandleID="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.906 [INFO][4239] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" HandleID="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef40), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-e100e79615", "pod":"calico-apiserver-6675cb976f-qgmnw", "timestamp":"2026-01-17 00:27:09.906345351 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.906 [INFO][4239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.906 [INFO][4239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.906 [INFO][4239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.913 [INFO][4239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.918 [INFO][4239] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.923 [INFO][4239] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.925 [INFO][4239] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.927 [INFO][4239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.927 [INFO][4239] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.929 [INFO][4239] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.934 [INFO][4239] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.944 [INFO][4239] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.130/26] block=192.168.114.128/26 handle="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.944 [INFO][4239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.130/26] handle="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.944 [INFO][4239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:09.983565 containerd[1501]: 2026-01-17 00:27:09.944 [INFO][4239] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.130/26] IPv6=[] ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" HandleID="k8s-pod-network.5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.985600 containerd[1501]: 2026-01-17 00:27:09.949 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"calico-apiserver-6675cb976f-qgmnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00d40863985", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:09.985600 containerd[1501]: 2026-01-17 00:27:09.950 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.130/32] ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.985600 containerd[1501]: 2026-01-17 00:27:09.950 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00d40863985 ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.985600 containerd[1501]: 2026-01-17 00:27:09.963 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:09.985600 containerd[1501]: 2026-01-17 
00:27:09.966 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca", Pod:"calico-apiserver-6675cb976f-qgmnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00d40863985", MAC:"0a:55:6f:c9:28:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:09.985600 containerd[1501]: 2026-01-17 00:27:09.980 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-qgmnw" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:10.019003 containerd[1501]: time="2026-01-17T00:27:10.018519030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:10.019003 containerd[1501]: time="2026-01-17T00:27:10.018590202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:10.019675 containerd[1501]: time="2026-01-17T00:27:10.019230982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:10.019675 containerd[1501]: time="2026-01-17T00:27:10.019361747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:10.045355 systemd[1]: Started cri-containerd-5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca.scope - libcontainer container 5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca. 
Jan 17 00:27:10.099681 systemd-networkd[1401]: cali4fc9e1807c8: Link UP Jan 17 00:27:10.102846 systemd-networkd[1401]: cali4fc9e1807c8: Gained carrier Jan 17 00:27:10.117948 containerd[1501]: time="2026-01-17T00:27:10.117367926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-qgmnw,Uid:7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca\"" Jan 17 00:27:10.120151 containerd[1501]: time="2026-01-17T00:27:10.120125373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.863 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0 calico-kube-controllers-64b8756fdc- calico-system 481372c3-ef6e-46bf-86cb-78fea87a79f9 937 0 2026-01-17 00:26:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64b8756fdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 calico-kube-controllers-64b8756fdc-7qc6h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4fc9e1807c8 [] [] }} ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.863 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.912 [INFO][4241] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" HandleID="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.913 [INFO][4241] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" HandleID="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-e100e79615", "pod":"calico-kube-controllers-64b8756fdc-7qc6h", "timestamp":"2026-01-17 00:27:09.912822203 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.913 [INFO][4241] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.944 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:09.944 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.016 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.027 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.040 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.048 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.051 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.051 [INFO][4241] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.054 [INFO][4241] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.075 [INFO][4241] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.085 [INFO][4241] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.131/26] block=192.168.114.128/26 handle="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.085 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.131/26] handle="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.085 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:10.136251 containerd[1501]: 2026-01-17 00:27:10.085 [INFO][4241] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.131/26] IPv6=[] ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" HandleID="k8s-pod-network.9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.137399 containerd[1501]: 2026-01-17 00:27:10.089 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0", GenerateName:"calico-kube-controllers-64b8756fdc-", Namespace:"calico-system", SelfLink:"", UID:"481372c3-ef6e-46bf-86cb-78fea87a79f9", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b8756fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"calico-kube-controllers-64b8756fdc-7qc6h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fc9e1807c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:10.137399 containerd[1501]: 2026-01-17 00:27:10.090 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.131/32] ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.137399 containerd[1501]: 2026-01-17 00:27:10.090 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fc9e1807c8 ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.137399 containerd[1501]: 2026-01-17 00:27:10.102 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" 
WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.137399 containerd[1501]: 2026-01-17 00:27:10.109 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0", GenerateName:"calico-kube-controllers-64b8756fdc-", Namespace:"calico-system", SelfLink:"", UID:"481372c3-ef6e-46bf-86cb-78fea87a79f9", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b8756fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e", Pod:"calico-kube-controllers-64b8756fdc-7qc6h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fc9e1807c8", MAC:"76:34:04:f5:f4:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:10.137399 containerd[1501]: 2026-01-17 00:27:10.130 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e" Namespace="calico-system" Pod="calico-kube-controllers-64b8756fdc-7qc6h" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:10.166137 containerd[1501]: time="2026-01-17T00:27:10.165958569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:10.166137 containerd[1501]: time="2026-01-17T00:27:10.166030011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:10.166137 containerd[1501]: time="2026-01-17T00:27:10.166058502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:10.166414 containerd[1501]: time="2026-01-17T00:27:10.166206686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:10.190208 systemd[1]: Started cri-containerd-9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e.scope - libcontainer container 9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e. Jan 17 00:27:10.252014 containerd[1501]: time="2026-01-17T00:27:10.251956186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b8756fdc-7qc6h,Uid:481372c3-ef6e-46bf-86cb-78fea87a79f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e\"" Jan 17 00:27:10.558759 containerd[1501]: time="2026-01-17T00:27:10.558670218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:10.560253 containerd[1501]: time="2026-01-17T00:27:10.560039712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:27:10.560253 containerd[1501]: time="2026-01-17T00:27:10.560125505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:10.560432 kubelet[2562]: E0117 00:27:10.560395 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:10.561009 kubelet[2562]: E0117 00:27:10.560451 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:10.561009 kubelet[2562]: E0117 00:27:10.560735 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz5jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-qgmnw_calico-apiserver(7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:10.562533 containerd[1501]: time="2026-01-17T00:27:10.561650963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:27:10.562754 kubelet[2562]: E0117 00:27:10.562439 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:27:10.821379 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Jan 17 00:27:10.842122 kubelet[2562]: E0117 00:27:10.840431 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:27:10.988598 containerd[1501]: time="2026-01-17T00:27:10.988539847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:10.992273 containerd[1501]: time="2026-01-17T00:27:10.992201454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:27:10.992756 containerd[1501]: time="2026-01-17T00:27:10.992341618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:10.993366 kubelet[2562]: E0117 00:27:10.993011 2562 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:10.993366 kubelet[2562]: E0117 00:27:10.993215 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:10.994429 kubelet[2562]: E0117 00:27:10.994289 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nkr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b8756fdc-7qc6h_calico-system(481372c3-ef6e-46bf-86cb-78fea87a79f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:10.996971 kubelet[2562]: E0117 00:27:10.996637 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:27:11.080483 systemd-networkd[1401]: cali00d40863985: Gained IPv6LL Jan 17 00:27:11.462878 systemd-networkd[1401]: cali4fc9e1807c8: Gained IPv6LL Jan 17 00:27:11.857226 kubelet[2562]: E0117 00:27:11.856064 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:27:11.857226 kubelet[2562]: E0117 00:27:11.857074 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:27:12.516824 containerd[1501]: time="2026-01-17T00:27:12.516748773Z" level=info msg="StopPodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\"" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.588 [INFO][4366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.588 [INFO][4366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" iface="eth0" netns="/var/run/netns/cni-5cccb14e-d85a-eb69-2523-03d300c9c89c" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.589 [INFO][4366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" iface="eth0" netns="/var/run/netns/cni-5cccb14e-d85a-eb69-2523-03d300c9c89c" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.589 [INFO][4366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" iface="eth0" netns="/var/run/netns/cni-5cccb14e-d85a-eb69-2523-03d300c9c89c" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.589 [INFO][4366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.589 [INFO][4366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.626 [INFO][4374] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.626 [INFO][4374] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.626 [INFO][4374] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.632 [WARNING][4374] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.632 [INFO][4374] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.634 [INFO][4374] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:12.643245 containerd[1501]: 2026-01-17 00:27:12.638 [INFO][4366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:12.643245 containerd[1501]: time="2026-01-17T00:27:12.641677069Z" level=info msg="TearDown network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" successfully" Jan 17 00:27:12.643245 containerd[1501]: time="2026-01-17T00:27:12.641725420Z" level=info msg="StopPodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" returns successfully" Jan 17 00:27:12.648405 containerd[1501]: time="2026-01-17T00:27:12.647562426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-s7mzt,Uid:73c589a5-8e71-425c-a060-0cf6cb3ed239,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:27:12.648917 systemd[1]: run-netns-cni\x2d5cccb14e\x2dd85a\x2deb69\x2d2523\x2d03d300c9c89c.mount: Deactivated successfully. 
Jan 17 00:27:12.788869 systemd-networkd[1401]: cali4e9d4212d78: Link UP Jan 17 00:27:12.790610 systemd-networkd[1401]: cali4e9d4212d78: Gained carrier Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.706 [INFO][4380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0 calico-apiserver-6675cb976f- calico-apiserver 73c589a5-8e71-425c-a060-0cf6cb3ed239 977 0 2026-01-17 00:26:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6675cb976f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 calico-apiserver-6675cb976f-s7mzt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e9d4212d78 [] [] }} ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.706 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.733 [INFO][4392] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" HandleID="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.733 [INFO][4392] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" HandleID="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-e100e79615", "pod":"calico-apiserver-6675cb976f-s7mzt", "timestamp":"2026-01-17 00:27:12.733040913 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.733 [INFO][4392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.733 [INFO][4392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.733 [INFO][4392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.740 [INFO][4392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.745 [INFO][4392] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.750 [INFO][4392] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.752 [INFO][4392] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.755 [INFO][4392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.756 [INFO][4392] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.758 [INFO][4392] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44 Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.764 [INFO][4392] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.773 [INFO][4392] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.132/26] block=192.168.114.128/26 handle="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.775 [INFO][4392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.132/26] handle="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.775 [INFO][4392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:12.809557 containerd[1501]: 2026-01-17 00:27:12.775 [INFO][4392] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.132/26] IPv6=[] ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" HandleID="k8s-pod-network.3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.810269 containerd[1501]: 2026-01-17 00:27:12.779 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c589a5-8e71-425c-a060-0cf6cb3ed239", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"calico-apiserver-6675cb976f-s7mzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e9d4212d78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:12.810269 containerd[1501]: 2026-01-17 00:27:12.779 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.132/32] ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.810269 containerd[1501]: 2026-01-17 00:27:12.779 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e9d4212d78 ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.810269 containerd[1501]: 2026-01-17 00:27:12.790 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.810269 containerd[1501]: 2026-01-17 
00:27:12.791 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c589a5-8e71-425c-a060-0cf6cb3ed239", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44", Pod:"calico-apiserver-6675cb976f-s7mzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e9d4212d78", MAC:"0e:2d:69:72:cf:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:12.810269 containerd[1501]: 2026-01-17 00:27:12.803 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44" Namespace="calico-apiserver" Pod="calico-apiserver-6675cb976f-s7mzt" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:12.835805 containerd[1501]: time="2026-01-17T00:27:12.835422988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:12.835805 containerd[1501]: time="2026-01-17T00:27:12.835500940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:12.835805 containerd[1501]: time="2026-01-17T00:27:12.835512711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:12.835805 containerd[1501]: time="2026-01-17T00:27:12.835589703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:12.871618 systemd[1]: Started cri-containerd-3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44.scope - libcontainer container 3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44. 
Jan 17 00:27:12.941811 containerd[1501]: time="2026-01-17T00:27:12.941501506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6675cb976f-s7mzt,Uid:73c589a5-8e71-425c-a060-0cf6cb3ed239,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44\"" Jan 17 00:27:12.946854 containerd[1501]: time="2026-01-17T00:27:12.946547278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:27:13.372737 containerd[1501]: time="2026-01-17T00:27:13.372644891Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:13.374530 containerd[1501]: time="2026-01-17T00:27:13.374408803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:27:13.374530 containerd[1501]: time="2026-01-17T00:27:13.374483185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:13.374767 kubelet[2562]: E0117 00:27:13.374658 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:13.374767 kubelet[2562]: E0117 00:27:13.374737 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:13.375376 kubelet[2562]: E0117 00:27:13.374924 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnczc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-s7mzt_calico-apiserver(73c589a5-8e71-425c-a060-0cf6cb3ed239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:13.376230 kubelet[2562]: E0117 00:27:13.376131 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:27:13.517417 containerd[1501]: time="2026-01-17T00:27:13.517363163Z" level=info msg="StopPodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\"" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.587 [INFO][4459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.587 [INFO][4459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" iface="eth0" netns="/var/run/netns/cni-2448f522-e3e6-490c-fc2f-184be8b77fb4" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.588 [INFO][4459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" iface="eth0" netns="/var/run/netns/cni-2448f522-e3e6-490c-fc2f-184be8b77fb4" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.590 [INFO][4459] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" iface="eth0" netns="/var/run/netns/cni-2448f522-e3e6-490c-fc2f-184be8b77fb4" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.590 [INFO][4459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.590 [INFO][4459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.622 [INFO][4466] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.623 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.623 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.632 [WARNING][4466] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.632 [INFO][4466] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.635 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:13.642767 containerd[1501]: 2026-01-17 00:27:13.637 [INFO][4459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:13.642767 containerd[1501]: time="2026-01-17T00:27:13.641073729Z" level=info msg="TearDown network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" successfully" Jan 17 00:27:13.642767 containerd[1501]: time="2026-01-17T00:27:13.641141551Z" level=info msg="StopPodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" returns successfully" Jan 17 00:27:13.642767 containerd[1501]: time="2026-01-17T00:27:13.642219212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgc4q,Uid:c92915ad-48c4-496c-93e3-f83efa51b583,Namespace:kube-system,Attempt:1,}" Jan 17 00:27:13.653055 systemd[1]: run-netns-cni\x2d2448f522\x2de3e6\x2d490c\x2dfc2f\x2d184be8b77fb4.mount: Deactivated successfully. 
Jan 17 00:27:13.801679 systemd-networkd[1401]: cali20426f0244b: Link UP Jan 17 00:27:13.809222 systemd-networkd[1401]: cali20426f0244b: Gained carrier Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.717 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0 coredns-668d6bf9bc- kube-system c92915ad-48c4-496c-93e3-f83efa51b583 987 0 2026-01-17 00:26:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 coredns-668d6bf9bc-mgc4q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20426f0244b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.717 [INFO][4472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.747 [INFO][4484] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" HandleID="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.747 [INFO][4484] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" HandleID="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-e100e79615", "pod":"coredns-668d6bf9bc-mgc4q", "timestamp":"2026-01-17 00:27:13.747030033 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.747 [INFO][4484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.747 [INFO][4484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.747 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.754 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.759 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.764 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.767 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.769 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.769 [INFO][4484] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.771 [INFO][4484] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0 Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.775 [INFO][4484] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.784 [INFO][4484] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.133/26] block=192.168.114.128/26 handle="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.784 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.133/26] handle="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.784 [INFO][4484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:13.832196 containerd[1501]: 2026-01-17 00:27:13.784 [INFO][4484] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.133/26] IPv6=[] ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" HandleID="k8s-pod-network.b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.833971 containerd[1501]: 2026-01-17 00:27:13.788 [INFO][4472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c92915ad-48c4-496c-93e3-f83efa51b583", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"coredns-668d6bf9bc-mgc4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20426f0244b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:13.833971 containerd[1501]: 2026-01-17 00:27:13.788 [INFO][4472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.133/32] ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.833971 containerd[1501]: 2026-01-17 00:27:13.789 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20426f0244b ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.833971 containerd[1501]: 2026-01-17 00:27:13.809 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.833971 containerd[1501]: 2026-01-17 00:27:13.811 [INFO][4472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c92915ad-48c4-496c-93e3-f83efa51b583", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0", Pod:"coredns-668d6bf9bc-mgc4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20426f0244b", MAC:"c6:ea:c3:40:f3:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:13.833971 containerd[1501]: 2026-01-17 00:27:13.825 [INFO][4472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-mgc4q" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:13.866699 containerd[1501]: time="2026-01-17T00:27:13.866244596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:13.866699 containerd[1501]: time="2026-01-17T00:27:13.866346189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:13.866699 containerd[1501]: time="2026-01-17T00:27:13.866379730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:13.866699 containerd[1501]: time="2026-01-17T00:27:13.866520254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:13.889347 kubelet[2562]: E0117 00:27:13.888916 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:27:13.920505 systemd[1]: Started cri-containerd-b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0.scope - libcontainer container b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0. Jan 17 00:27:13.975837 containerd[1501]: time="2026-01-17T00:27:13.975736383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mgc4q,Uid:c92915ad-48c4-496c-93e3-f83efa51b583,Namespace:kube-system,Attempt:1,} returns sandbox id \"b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0\"" Jan 17 00:27:13.982238 containerd[1501]: time="2026-01-17T00:27:13.982015958Z" level=info msg="CreateContainer within sandbox \"b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:27:14.003360 containerd[1501]: time="2026-01-17T00:27:14.003301342Z" level=info msg="CreateContainer within sandbox \"b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"100b62f5697e387f4d5ca78484efacf0b0dbaf930c89d1f3cc888518cf626ea6\"" Jan 17 00:27:14.005500 containerd[1501]: time="2026-01-17T00:27:14.005147804Z" level=info msg="StartContainer for \"100b62f5697e387f4d5ca78484efacf0b0dbaf930c89d1f3cc888518cf626ea6\"" Jan 17 00:27:14.035294 systemd[1]: Started cri-containerd-100b62f5697e387f4d5ca78484efacf0b0dbaf930c89d1f3cc888518cf626ea6.scope - libcontainer container 100b62f5697e387f4d5ca78484efacf0b0dbaf930c89d1f3cc888518cf626ea6. Jan 17 00:27:14.068510 containerd[1501]: time="2026-01-17T00:27:14.068467339Z" level=info msg="StartContainer for \"100b62f5697e387f4d5ca78484efacf0b0dbaf930c89d1f3cc888518cf626ea6\" returns successfully" Jan 17 00:27:14.086250 systemd-networkd[1401]: cali4e9d4212d78: Gained IPv6LL Jan 17 00:27:14.517294 containerd[1501]: time="2026-01-17T00:27:14.517160586Z" level=info msg="StopPodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\"" Jan 17 00:27:14.518507 containerd[1501]: time="2026-01-17T00:27:14.518314429Z" level=info msg="StopPodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\"" Jan 17 00:27:14.519604 containerd[1501]: time="2026-01-17T00:27:14.519192675Z" level=info msg="StopPodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\"" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.605 [INFO][4604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.605 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" iface="eth0" netns="/var/run/netns/cni-4383b9ba-6843-8e65-39b1-13738b448a6b" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.606 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" iface="eth0" netns="/var/run/netns/cni-4383b9ba-6843-8e65-39b1-13738b448a6b" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.608 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" iface="eth0" netns="/var/run/netns/cni-4383b9ba-6843-8e65-39b1-13738b448a6b" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.608 [INFO][4604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.608 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.666 [INFO][4631] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.666 [INFO][4631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.667 [INFO][4631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.677 [WARNING][4631] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.677 [INFO][4631] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.680 [INFO][4631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:14.688269 containerd[1501]: 2026-01-17 00:27:14.682 [INFO][4604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:14.689295 containerd[1501]: time="2026-01-17T00:27:14.689036971Z" level=info msg="TearDown network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" successfully" Jan 17 00:27:14.689295 containerd[1501]: time="2026-01-17T00:27:14.689066412Z" level=info msg="StopPodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" returns successfully" Jan 17 00:27:14.697546 systemd[1]: run-netns-cni\x2d4383b9ba\x2d6843\x2d8e65\x2d39b1\x2d13738b448a6b.mount: Deactivated successfully. 
Jan 17 00:27:14.710753 containerd[1501]: time="2026-01-17T00:27:14.710063454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-djgpg,Uid:dd4b2cd2-cd64-4cf4-9264-84814b92189d,Namespace:kube-system,Attempt:1,}" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.617 [INFO][4617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.632 [INFO][4617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" iface="eth0" netns="/var/run/netns/cni-8532272a-1207-b072-88e1-939f4a9d5e72" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.634 [INFO][4617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" iface="eth0" netns="/var/run/netns/cni-8532272a-1207-b072-88e1-939f4a9d5e72" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.635 [INFO][4617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" iface="eth0" netns="/var/run/netns/cni-8532272a-1207-b072-88e1-939f4a9d5e72" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.635 [INFO][4617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.635 [INFO][4617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.678 [INFO][4639] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.678 [INFO][4639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.680 [INFO][4639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.687 [WARNING][4639] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.688 [INFO][4639] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.693 [INFO][4639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:14.712600 containerd[1501]: 2026-01-17 00:27:14.705 [INFO][4617] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:14.717146 containerd[1501]: time="2026-01-17T00:27:14.717061454Z" level=info msg="TearDown network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" successfully" Jan 17 00:27:14.717146 containerd[1501]: time="2026-01-17T00:27:14.717132486Z" level=info msg="StopPodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" returns successfully" Jan 17 00:27:14.717849 containerd[1501]: time="2026-01-17T00:27:14.717820406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wn7sn,Uid:eb95b785-13b8-4aa9-b43b-38efbd205ceb,Namespace:calico-system,Attempt:1,}" Jan 17 00:27:14.720479 systemd[1]: run-netns-cni\x2d8532272a\x2d1207\x2db072\x2d88e1\x2d939f4a9d5e72.mount: Deactivated successfully. Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.651 [INFO][4618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.652 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" iface="eth0" netns="/var/run/netns/cni-c1488c0d-5ce2-72dc-e4e9-d95a98add8e4" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.654 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" iface="eth0" netns="/var/run/netns/cni-c1488c0d-5ce2-72dc-e4e9-d95a98add8e4" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.654 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" iface="eth0" netns="/var/run/netns/cni-c1488c0d-5ce2-72dc-e4e9-d95a98add8e4" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.654 [INFO][4618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.654 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.682 [INFO][4644] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.683 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.693 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.711 [WARNING][4644] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.711 [INFO][4644] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.716 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:14.725065 containerd[1501]: 2026-01-17 00:27:14.722 [INFO][4618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:14.725781 containerd[1501]: time="2026-01-17T00:27:14.725444434Z" level=info msg="TearDown network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" successfully" Jan 17 00:27:14.725781 containerd[1501]: time="2026-01-17T00:27:14.725492076Z" level=info msg="StopPodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" returns successfully" Jan 17 00:27:14.731205 containerd[1501]: time="2026-01-17T00:27:14.731083786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ssv4k,Uid:a05bf132-cecb-477a-a941-f502759ced80,Namespace:calico-system,Attempt:1,}" Jan 17 00:27:14.734344 systemd[1]: run-netns-cni\x2dc1488c0d\x2d5ce2\x2d72dc\x2de4e9\x2dd95a98add8e4.mount: Deactivated successfully. Jan 17 00:27:14.898475 kubelet[2562]: E0117 00:27:14.897809 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:27:14.976052 kubelet[2562]: I0117 00:27:14.972619 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mgc4q" podStartSLOduration=42.972591986 podStartE2EDuration="42.972591986s" podCreationTimestamp="2026-01-17 00:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:27:14.931026245 +0000 UTC m=+49.565863630" watchObservedRunningTime="2026-01-17 00:27:14.972591986 +0000 UTC m=+49.607429361" Jan 17 00:27:15.025032 systemd-networkd[1401]: calie77eb6d5c1d: Link UP Jan 17 00:27:15.029227 systemd-networkd[1401]: calie77eb6d5c1d: Gained carrier Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.829 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0 coredns-668d6bf9bc- kube-system dd4b2cd2-cd64-4cf4-9264-84814b92189d 1005 0 2026-01-17 00:26:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 coredns-668d6bf9bc-djgpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie77eb6d5c1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.829 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.871 [INFO][4688] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" HandleID="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.871 [INFO][4688] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" HandleID="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332510), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-e100e79615", "pod":"coredns-668d6bf9bc-djgpg", "timestamp":"2026-01-17 00:27:14.871009995 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.871 [INFO][4688] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.871 [INFO][4688] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.872 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.879 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.889 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.911 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.919 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.935 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.938 [INFO][4688] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.943 [INFO][4688] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00 Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.981 [INFO][4688] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:14.998 [INFO][4688] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.134/26] block=192.168.114.128/26 handle="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:15.000 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.134/26] handle="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:15.000 [INFO][4688] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:15.055839 containerd[1501]: 2026-01-17 00:27:15.000 [INFO][4688] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.134/26] IPv6=[] ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" HandleID="k8s-pod-network.d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.057145 containerd[1501]: 2026-01-17 00:27:15.007 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd4b2cd2-cd64-4cf4-9264-84814b92189d", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"coredns-668d6bf9bc-djgpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77eb6d5c1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:15.057145 containerd[1501]: 2026-01-17 00:27:15.007 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.134/32] ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.057145 containerd[1501]: 2026-01-17 00:27:15.007 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie77eb6d5c1d ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.057145 containerd[1501]: 2026-01-17 00:27:15.027 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.057145 containerd[1501]: 2026-01-17 00:27:15.032 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd4b2cd2-cd64-4cf4-9264-84814b92189d", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00", Pod:"coredns-668d6bf9bc-djgpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77eb6d5c1d", MAC:"d2:93:7d:ee:db:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:15.057145 containerd[1501]: 2026-01-17 00:27:15.053 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-djgpg" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:15.089617 containerd[1501]: time="2026-01-17T00:27:15.085729348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:15.089617 containerd[1501]: time="2026-01-17T00:27:15.086439628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:15.089617 containerd[1501]: time="2026-01-17T00:27:15.086451528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:15.089617 containerd[1501]: time="2026-01-17T00:27:15.086694415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:15.111824 systemd[1]: Started cri-containerd-d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00.scope - libcontainer container d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00. Jan 17 00:27:15.122436 systemd-networkd[1401]: caliee465a71775: Link UP Jan 17 00:27:15.127344 systemd-networkd[1401]: caliee465a71775: Gained carrier Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:14.853 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0 goldmane-666569f655- calico-system a05bf132-cecb-477a-a941-f502759ced80 1007 0 2026-01-17 00:26:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 goldmane-666569f655-ssv4k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliee465a71775 [] [] }} ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:14.853 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:14.919 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" HandleID="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:14.921 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" HandleID="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032dc90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-e100e79615", "pod":"goldmane-666569f655-ssv4k", "timestamp":"2026-01-17 00:27:14.919039501 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:14.921 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.000 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.001 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.016 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.037 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.049 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.058 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.062 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.062 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.064 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0 Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.072 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.097 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.135/26] block=192.168.114.128/26 handle="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.097 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.135/26] handle="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.097 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:15.171168 containerd[1501]: 2026-01-17 00:27:15.097 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.135/26] IPv6=[] ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" HandleID="k8s-pod-network.5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.171683 containerd[1501]: 2026-01-17 00:27:15.105 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a05bf132-cecb-477a-a941-f502759ced80", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"goldmane-666569f655-ssv4k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee465a71775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:15.171683 containerd[1501]: 2026-01-17 00:27:15.106 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.135/32] ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.171683 containerd[1501]: 2026-01-17 00:27:15.106 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee465a71775 ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.171683 containerd[1501]: 2026-01-17 00:27:15.138 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.171683 containerd[1501]: 2026-01-17 00:27:15.143 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" 
Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a05bf132-cecb-477a-a941-f502759ced80", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0", Pod:"goldmane-666569f655-ssv4k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee465a71775", MAC:"5e:19:33:33:27:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:15.171683 containerd[1501]: 2026-01-17 00:27:15.166 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0" Namespace="calico-system" Pod="goldmane-666569f655-ssv4k" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:15.199264 containerd[1501]: time="2026-01-17T00:27:15.198522720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-djgpg,Uid:dd4b2cd2-cd64-4cf4-9264-84814b92189d,Namespace:kube-system,Attempt:1,} returns sandbox id \"d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00\"" Jan 17 00:27:15.221122 containerd[1501]: time="2026-01-17T00:27:15.220586847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:15.221122 containerd[1501]: time="2026-01-17T00:27:15.220820673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:15.221122 containerd[1501]: time="2026-01-17T00:27:15.220833624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:15.221755 containerd[1501]: time="2026-01-17T00:27:15.221621315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:15.223285 containerd[1501]: time="2026-01-17T00:27:15.223211180Z" level=info msg="CreateContainer within sandbox \"d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:27:15.251706 containerd[1501]: time="2026-01-17T00:27:15.251624314Z" level=info msg="CreateContainer within sandbox \"d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c23696d03b8aa429ef897243f1d9d65477b6a0429def317ddb0efe475716a96a\"" Jan 17 00:27:15.254015 containerd[1501]: time="2026-01-17T00:27:15.253866377Z" level=info msg="StartContainer for \"c23696d03b8aa429ef897243f1d9d65477b6a0429def317ddb0efe475716a96a\"" Jan 17 00:27:15.256339 systemd-networkd[1401]: cali944594bddaf: Link UP Jan 17 00:27:15.267376 systemd-networkd[1401]: cali944594bddaf: Gained carrier Jan 17 00:27:15.272299 systemd[1]: Started cri-containerd-5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0.scope - libcontainer container 5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0. Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:14.833 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0 csi-node-driver- calico-system eb95b785-13b8-4aa9-b43b-38efbd205ceb 1006 0 2026-01-17 00:26:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-e100e79615 csi-node-driver-wn7sn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali944594bddaf [] [] }} ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:14.834 [INFO][4662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:14.978 [INFO][4693] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" HandleID="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:14.979 [INFO][4693] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" HandleID="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fc90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-e100e79615", "pod":"csi-node-driver-wn7sn", "timestamp":"2026-01-17 
00:27:14.978249078 +0000 UTC"}, Hostname:"ci-4081-3-6-n-e100e79615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:14.980 [INFO][4693] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.098 [INFO][4693] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.098 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-e100e79615' Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.116 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.145 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.159 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.165 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.176 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.176 [INFO][4693] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.180 [INFO][4693] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.187 [INFO][4693] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.198 [INFO][4693] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.136/26] block=192.168.114.128/26 handle="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.198 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.136/26] handle="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" host="ci-4081-3-6-n-e100e79615" Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.198 [INFO][4693] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:27:15.301321 containerd[1501]: 2026-01-17 00:27:15.198 [INFO][4693] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.136/26] IPv6=[] ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" HandleID="k8s-pod-network.015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.303800 containerd[1501]: 2026-01-17 00:27:15.226 [INFO][4662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb95b785-13b8-4aa9-b43b-38efbd205ceb", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"", Pod:"csi-node-driver-wn7sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali944594bddaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:15.303800 containerd[1501]: 2026-01-17 00:27:15.227 [INFO][4662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.136/32] ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.303800 containerd[1501]: 2026-01-17 00:27:15.227 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali944594bddaf ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.303800 containerd[1501]: 2026-01-17 00:27:15.259 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.303800 containerd[1501]: 2026-01-17 00:27:15.277 [INFO][4662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb95b785-13b8-4aa9-b43b-38efbd205ceb", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb", Pod:"csi-node-driver-wn7sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali944594bddaf", MAC:"72:1e:74:62:39:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:15.303800 containerd[1501]: 2026-01-17 00:27:15.293 [INFO][4662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb" Namespace="calico-system" Pod="csi-node-driver-wn7sn" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:15.341178 systemd[1]: Started cri-containerd-c23696d03b8aa429ef897243f1d9d65477b6a0429def317ddb0efe475716a96a.scope - libcontainer container c23696d03b8aa429ef897243f1d9d65477b6a0429def317ddb0efe475716a96a. Jan 17 00:27:15.371913 containerd[1501]: time="2026-01-17T00:27:15.370535447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:15.371913 containerd[1501]: time="2026-01-17T00:27:15.370592278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:15.371913 containerd[1501]: time="2026-01-17T00:27:15.370605588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:15.371913 containerd[1501]: time="2026-01-17T00:27:15.370688361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:15.413797 containerd[1501]: time="2026-01-17T00:27:15.413666992Z" level=info msg="StartContainer for \"c23696d03b8aa429ef897243f1d9d65477b6a0429def317ddb0efe475716a96a\" returns successfully" Jan 17 00:27:15.417640 systemd[1]: Started cri-containerd-015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb.scope - libcontainer container 015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb. Jan 17 00:27:15.471197 containerd[1501]: time="2026-01-17T00:27:15.471157118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ssv4k,Uid:a05bf132-cecb-477a-a941-f502759ced80,Namespace:calico-system,Attempt:1,} returns sandbox id \"5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0\"" Jan 17 00:27:15.473497 containerd[1501]: time="2026-01-17T00:27:15.473294968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:27:15.515466 containerd[1501]: time="2026-01-17T00:27:15.515340333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wn7sn,Uid:eb95b785-13b8-4aa9-b43b-38efbd205ceb,Namespace:calico-system,Attempt:1,} returns sandbox id \"015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb\"" Jan 17 00:27:15.814437 systemd-networkd[1401]: cali20426f0244b: Gained IPv6LL Jan 17 00:27:15.900751 containerd[1501]: time="2026-01-17T00:27:15.900279610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:15.902782 containerd[1501]: time="2026-01-17T00:27:15.902677307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:27:15.902782 containerd[1501]: time="2026-01-17T00:27:15.902739269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:15.903422 kubelet[2562]: E0117 00:27:15.903242 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:15.903422 kubelet[2562]: E0117 00:27:15.903306 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:15.904921 kubelet[2562]: E0117 00:27:15.903561 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn44h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ssv4k_calico-system(a05bf132-cecb-477a-a941-f502759ced80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:15.905053 containerd[1501]: time="2026-01-17T00:27:15.904238110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:27:15.905734 kubelet[2562]: E0117 00:27:15.905661 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:27:15.934635 kubelet[2562]: I0117 00:27:15.933835 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-djgpg" podStartSLOduration=43.933803427 podStartE2EDuration="43.933803427s" podCreationTimestamp="2026-01-17 00:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:27:15.933773956 +0000 UTC m=+50.568611381" watchObservedRunningTime="2026-01-17 00:27:15.933803427 +0000 UTC m=+50.568640802" Jan 17 00:27:16.349777 containerd[1501]: time="2026-01-17T00:27:16.349564186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:16.351351 containerd[1501]: time="2026-01-17T00:27:16.351179510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:27:16.351351 containerd[1501]: time="2026-01-17T00:27:16.351246882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:27:16.351593 kubelet[2562]: E0117 00:27:16.351533 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:16.351657 kubelet[2562]: E0117 00:27:16.351603 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:16.351854 kubelet[2562]: E0117 00:27:16.351798 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:16.356698 containerd[1501]: time="2026-01-17T00:27:16.356644489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:27:16.389423 systemd-networkd[1401]: caliee465a71775: Gained IPv6LL Jan 17 00:27:16.454113 systemd-networkd[1401]: calie77eb6d5c1d: Gained IPv6LL Jan 17 00:27:16.771176 containerd[1501]: time="2026-01-17T00:27:16.771078655Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:16.772985 containerd[1501]: time="2026-01-17T00:27:16.772911485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:27:16.773235 containerd[1501]: time="2026-01-17T00:27:16.772935616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:27:16.773506 kubelet[2562]: E0117 00:27:16.773446 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:16.773613 kubelet[2562]: E0117 00:27:16.773534 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:16.773867 kubelet[2562]: E0117 00:27:16.773730 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:16.775257 kubelet[2562]: E0117 00:27:16.775179 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:27:16.910336 kubelet[2562]: E0117 00:27:16.910264 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:27:16.910876 kubelet[2562]: E0117 00:27:16.910378 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:27:17.094491 systemd-networkd[1401]: cali944594bddaf: Gained IPv6LL Jan 17 00:27:24.518689 containerd[1501]: time="2026-01-17T00:27:24.518050429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:27:24.944974 containerd[1501]: time="2026-01-17T00:27:24.944583857Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:24.946249 containerd[1501]: time="2026-01-17T00:27:24.946193222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:27:24.946401 containerd[1501]: time="2026-01-17T00:27:24.946218753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:27:24.946846 kubelet[2562]: E0117 00:27:24.946618 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:24.946846 kubelet[2562]: E0117 00:27:24.946675 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:24.946846 kubelet[2562]: E0117 00:27:24.946799 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4cfe1eccd3c14c02813c65fb803c84dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:24.949732 containerd[1501]: time="2026-01-17T00:27:24.949694852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:27:25.382374 containerd[1501]: time="2026-01-17T00:27:25.382280897Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:25.383983 containerd[1501]: time="2026-01-17T00:27:25.383778339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:27:25.384221 containerd[1501]: time="2026-01-17T00:27:25.383925543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:25.384510 kubelet[2562]: E0117 00:27:25.384314 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 
00:27:25.384597 kubelet[2562]: E0117 00:27:25.384541 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:25.385331 kubelet[2562]: E0117 00:27:25.384822 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:25.386361 kubelet[2562]: E0117 00:27:25.386218 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:27:25.518982 containerd[1501]: time="2026-01-17T00:27:25.518626882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:27:25.523025 containerd[1501]: time="2026-01-17T00:27:25.521818761Z" level=info msg="StopPodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\"" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.591 [WARNING][4929] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.591 [INFO][4929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.591 [INFO][4929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" iface="eth0" netns="" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.591 [INFO][4929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.591 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.641 [INFO][4938] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.642 [INFO][4938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.642 [INFO][4938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.649 [WARNING][4938] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.649 [INFO][4938] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.651 [INFO][4938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:25.656015 containerd[1501]: 2026-01-17 00:27:25.653 [INFO][4929] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.656557 containerd[1501]: time="2026-01-17T00:27:25.656525690Z" level=info msg="TearDown network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" successfully" Jan 17 00:27:25.656727 containerd[1501]: time="2026-01-17T00:27:25.656711694Z" level=info msg="StopPodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" returns successfully" Jan 17 00:27:25.658154 containerd[1501]: time="2026-01-17T00:27:25.658087194Z" level=info msg="RemovePodSandbox for \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\"" Jan 17 00:27:25.658233 containerd[1501]: time="2026-01-17T00:27:25.658164716Z" level=info msg="Forcibly stopping sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\"" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.698 [WARNING][4952] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" WorkloadEndpoint="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.698 [INFO][4952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.698 [INFO][4952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" iface="eth0" netns="" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.698 [INFO][4952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.698 [INFO][4952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.739 [INFO][4959] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.740 [INFO][4959] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.740 [INFO][4959] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.745 [WARNING][4959] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.745 [INFO][4959] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" HandleID="k8s-pod-network.b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Workload="ci--4081--3--6--n--e100e79615-k8s-whisker--ddfc459bc--95rsc-eth0" Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.747 [INFO][4959] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:25.752138 containerd[1501]: 2026-01-17 00:27:25.748 [INFO][4952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4" Jan 17 00:27:25.752578 containerd[1501]: time="2026-01-17T00:27:25.752249642Z" level=info msg="TearDown network for sandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" successfully" Jan 17 00:27:25.759430 containerd[1501]: time="2026-01-17T00:27:25.759383188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:25.759513 containerd[1501]: time="2026-01-17T00:27:25.759464660Z" level=info msg="RemovePodSandbox \"b9a908f1fd378b1d4d36a6616aa607fb6d749c5a1e70bfe5a158f7cf5fe0cba4\" returns successfully" Jan 17 00:27:25.764040 containerd[1501]: time="2026-01-17T00:27:25.763893668Z" level=info msg="StopPodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\"" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.840 [WARNING][4973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c92915ad-48c4-496c-93e3-f83efa51b583", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0", Pod:"coredns-668d6bf9bc-mgc4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20426f0244b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.841 [INFO][4973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.841 [INFO][4973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" iface="eth0" netns="" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.841 [INFO][4973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.841 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.862 [INFO][4981] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.863 [INFO][4981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.863 [INFO][4981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.868 [WARNING][4981] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.868 [INFO][4981] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.870 [INFO][4981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:25.874293 containerd[1501]: 2026-01-17 00:27:25.872 [INFO][4973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.874703 containerd[1501]: time="2026-01-17T00:27:25.874350073Z" level=info msg="TearDown network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" successfully" Jan 17 00:27:25.874703 containerd[1501]: time="2026-01-17T00:27:25.874384404Z" level=info msg="StopPodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" returns successfully" Jan 17 00:27:25.875377 containerd[1501]: time="2026-01-17T00:27:25.875322004Z" level=info msg="RemovePodSandbox for \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\"" Jan 17 00:27:25.875377 containerd[1501]: time="2026-01-17T00:27:25.875363965Z" level=info msg="Forcibly stopping sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\"" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.910 [WARNING][4995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c92915ad-48c4-496c-93e3-f83efa51b583", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"b0b232bb2895138a077c20c711c1cb8ed8db3f4fc1410689b977067eff0ac7e0", Pod:"coredns-668d6bf9bc-mgc4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20426f0244b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.910 [INFO][4995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.910 [INFO][4995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" iface="eth0" netns="" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.910 [INFO][4995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.910 [INFO][4995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.938 [INFO][5002] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.938 [INFO][5002] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.938 [INFO][5002] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.946 [WARNING][5002] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.946 [INFO][5002] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" HandleID="k8s-pod-network.3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--mgc4q-eth0" Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.950 [INFO][5002] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:25.955657 containerd[1501]: 2026-01-17 00:27:25.952 [INFO][4995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8" Jan 17 00:27:25.955657 containerd[1501]: time="2026-01-17T00:27:25.955590408Z" level=info msg="TearDown network for sandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" successfully" Jan 17 00:27:25.960684 containerd[1501]: time="2026-01-17T00:27:25.960579377Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:25.961855 containerd[1501]: time="2026-01-17T00:27:25.961618959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:27:25.961855 containerd[1501]: time="2026-01-17T00:27:25.961703381Z" level=info msg="RemovePodSandbox \"3d92bb12f39a75e0bf578ea5270c93418c3fb227f6c5f2b52594ba64f87d76d8\" returns successfully" Jan 17 00:27:25.962579 containerd[1501]: time="2026-01-17T00:27:25.962546939Z" level=info msg="StopPodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\"" Jan 17 00:27:25.963022 containerd[1501]: time="2026-01-17T00:27:25.962976480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:27:25.963122 containerd[1501]: time="2026-01-17T00:27:25.963059421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:25.963406 kubelet[2562]: E0117 00:27:25.963358 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:25.964588 kubelet[2562]: E0117 00:27:25.963894 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:25.964588 kubelet[2562]: E0117 00:27:25.964202 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz5jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-qgmnw_calico-apiserver(7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:25.966452 kubelet[2562]: E0117 00:27:25.966147 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.003 [WARNING][5016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca", Pod:"calico-apiserver-6675cb976f-qgmnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00d40863985", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.003 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.003 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" iface="eth0" netns="" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.003 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.003 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.029 [INFO][5023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.029 [INFO][5023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.029 [INFO][5023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.036 [WARNING][5023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.037 [INFO][5023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.038 [INFO][5023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.043484 containerd[1501]: 2026-01-17 00:27:26.041 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.044043 containerd[1501]: time="2026-01-17T00:27:26.043989897Z" level=info msg="TearDown network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" successfully" Jan 17 00:27:26.044043 containerd[1501]: time="2026-01-17T00:27:26.044023997Z" level=info msg="StopPodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" returns successfully" Jan 17 00:27:26.044633 containerd[1501]: time="2026-01-17T00:27:26.044593500Z" level=info msg="RemovePodSandbox for \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\"" Jan 17 00:27:26.044633 containerd[1501]: time="2026-01-17T00:27:26.044627151Z" level=info msg="Forcibly stopping sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\"" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.084 [WARNING][5037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"5e89e2b6d8d94c30a770aac5fb14041e0e62d89c0ac0ba27e8fe21df856ad1ca", Pod:"calico-apiserver-6675cb976f-qgmnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00d40863985", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.085 [INFO][5037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.085 [INFO][5037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" iface="eth0" netns="" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.085 [INFO][5037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.085 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.117 [INFO][5044] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.117 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.117 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.125 [WARNING][5044] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.125 [INFO][5044] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" HandleID="k8s-pod-network.449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--qgmnw-eth0" Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.127 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.132819 containerd[1501]: 2026-01-17 00:27:26.130 [INFO][5037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6" Jan 17 00:27:26.133371 containerd[1501]: time="2026-01-17T00:27:26.132886744Z" level=info msg="TearDown network for sandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" successfully" Jan 17 00:27:26.138442 containerd[1501]: time="2026-01-17T00:27:26.138381883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:26.138582 containerd[1501]: time="2026-01-17T00:27:26.138460425Z" level=info msg="RemovePodSandbox \"449644b9235453a357ba716e41402bd70e63edc2368ae781bf798daffcfedbc6\" returns successfully" Jan 17 00:27:26.139689 containerd[1501]: time="2026-01-17T00:27:26.139219341Z" level=info msg="StopPodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\"" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.195 [WARNING][5058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd4b2cd2-cd64-4cf4-9264-84814b92189d", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00", Pod:"coredns-668d6bf9bc-djgpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77eb6d5c1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.195 [INFO][5058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.195 [INFO][5058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" iface="eth0" netns="" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.195 [INFO][5058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.195 [INFO][5058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.232 [INFO][5065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.233 [INFO][5065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.233 [INFO][5065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.240 [WARNING][5065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.241 [INFO][5065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.243 [INFO][5065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.251666 containerd[1501]: 2026-01-17 00:27:26.247 [INFO][5058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.252248 containerd[1501]: time="2026-01-17T00:27:26.251737245Z" level=info msg="TearDown network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" successfully" Jan 17 00:27:26.252248 containerd[1501]: time="2026-01-17T00:27:26.251773786Z" level=info msg="StopPodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" returns successfully" Jan 17 00:27:26.252807 containerd[1501]: time="2026-01-17T00:27:26.252713726Z" level=info msg="RemovePodSandbox for \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\"" Jan 17 00:27:26.252871 containerd[1501]: time="2026-01-17T00:27:26.252818598Z" level=info msg="Forcibly stopping sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\"" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.302 [WARNING][5079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd4b2cd2-cd64-4cf4-9264-84814b92189d", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"d09761f24b595a6b2a20a5cb08b5cddef9d9513e128604764bf33689024a5d00", Pod:"coredns-668d6bf9bc-djgpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie77eb6d5c1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.302 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.303 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" iface="eth0" netns="" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.303 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.303 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.324 [INFO][5086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.324 [INFO][5086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.324 [INFO][5086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.330 [WARNING][5086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.330 [INFO][5086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" HandleID="k8s-pod-network.e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Workload="ci--4081--3--6--n--e100e79615-k8s-coredns--668d6bf9bc--djgpg-eth0" Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.332 [INFO][5086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.336523 containerd[1501]: 2026-01-17 00:27:26.334 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e" Jan 17 00:27:26.337024 containerd[1501]: time="2026-01-17T00:27:26.336565975Z" level=info msg="TearDown network for sandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" successfully" Jan 17 00:27:26.340845 containerd[1501]: time="2026-01-17T00:27:26.340788956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:26.340845 containerd[1501]: time="2026-01-17T00:27:26.340846588Z" level=info msg="RemovePodSandbox \"e9e413599c888e558f6b61d8a7e054695b06434e7332c6067770af00b48c784e\" returns successfully" Jan 17 00:27:26.341823 containerd[1501]: time="2026-01-17T00:27:26.341447740Z" level=info msg="StopPodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\"" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.375 [WARNING][5101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a05bf132-cecb-477a-a941-f502759ced80", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0", Pod:"goldmane-666569f655-ssv4k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee465a71775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.376 [INFO][5101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.376 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" iface="eth0" netns="" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.376 [INFO][5101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.376 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.396 [INFO][5108] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.396 [INFO][5108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.396 [INFO][5108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.403 [WARNING][5108] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.403 [INFO][5108] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.405 [INFO][5108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.409734 containerd[1501]: 2026-01-17 00:27:26.407 [INFO][5101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.410190 containerd[1501]: time="2026-01-17T00:27:26.409789687Z" level=info msg="TearDown network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" successfully" Jan 17 00:27:26.410190 containerd[1501]: time="2026-01-17T00:27:26.409818088Z" level=info msg="StopPodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" returns successfully" Jan 17 00:27:26.410968 containerd[1501]: time="2026-01-17T00:27:26.410577724Z" level=info msg="RemovePodSandbox for \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\"" Jan 17 00:27:26.410968 containerd[1501]: time="2026-01-17T00:27:26.410616825Z" level=info msg="Forcibly stopping sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\"" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.447 [WARNING][5122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a05bf132-cecb-477a-a941-f502759ced80", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"5dcd227185601bde83f8cba299823a0eb040d9b249d2d53ac67aa892f31514b0", Pod:"goldmane-666569f655-ssv4k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee465a71775", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.447 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.447 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" iface="eth0" netns="" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.447 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.447 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.473 [INFO][5129] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.473 [INFO][5129] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.473 [INFO][5129] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.479 [WARNING][5129] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.479 [INFO][5129] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" HandleID="k8s-pod-network.9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Workload="ci--4081--3--6--n--e100e79615-k8s-goldmane--666569f655--ssv4k-eth0" Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.480 [INFO][5129] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.485450 containerd[1501]: 2026-01-17 00:27:26.482 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a" Jan 17 00:27:26.486790 containerd[1501]: time="2026-01-17T00:27:26.485899640Z" level=info msg="TearDown network for sandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" successfully" Jan 17 00:27:26.489956 containerd[1501]: time="2026-01-17T00:27:26.489913516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:26.489956 containerd[1501]: time="2026-01-17T00:27:26.489962867Z" level=info msg="RemovePodSandbox \"9d33cd18386e695ae774c21f623f57628505898b093b56bca4d944a9a7f7b11a\" returns successfully" Jan 17 00:27:26.490584 containerd[1501]: time="2026-01-17T00:27:26.490542460Z" level=info msg="StopPodSandbox for \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\"" Jan 17 00:27:26.524869 containerd[1501]: time="2026-01-17T00:27:26.524536170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.523 [WARNING][5143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0", GenerateName:"calico-kube-controllers-64b8756fdc-", Namespace:"calico-system", SelfLink:"", UID:"481372c3-ef6e-46bf-86cb-78fea87a79f9", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b8756fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e", Pod:"calico-kube-controllers-64b8756fdc-7qc6h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fc9e1807c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.524 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.525 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" iface="eth0" netns="" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.525 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.525 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.558 [INFO][5150] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.558 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.558 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.564 [WARNING][5150] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.564 [INFO][5150] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.566 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.570880 containerd[1501]: 2026-01-17 00:27:26.568 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.570880 containerd[1501]: time="2026-01-17T00:27:26.570738501Z" level=info msg="TearDown network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" successfully" Jan 17 00:27:26.570880 containerd[1501]: time="2026-01-17T00:27:26.570777422Z" level=info msg="StopPodSandbox for \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" returns successfully" Jan 17 00:27:26.571586 containerd[1501]: time="2026-01-17T00:27:26.571424516Z" level=info msg="RemovePodSandbox for \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\"" Jan 17 00:27:26.571586 containerd[1501]: time="2026-01-17T00:27:26.571460716Z" level=info msg="Forcibly stopping sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\"" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.613 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0", GenerateName:"calico-kube-controllers-64b8756fdc-", Namespace:"calico-system", SelfLink:"", UID:"481372c3-ef6e-46bf-86cb-78fea87a79f9", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b8756fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"9d9e65ef6f72f349cdb809aa7a0981ef0b933898fadc4ab7271dd997743c8d7e", Pod:"calico-kube-controllers-64b8756fdc-7qc6h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fc9e1807c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.613 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.613 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" iface="eth0" netns="" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.613 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.613 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.637 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.637 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.637 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.643 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.643 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" HandleID="k8s-pod-network.5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--kube--controllers--64b8756fdc--7qc6h-eth0" Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.645 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.650998 containerd[1501]: 2026-01-17 00:27:26.647 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333" Jan 17 00:27:26.650998 containerd[1501]: time="2026-01-17T00:27:26.649570753Z" level=info msg="TearDown network for sandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" successfully" Jan 17 00:27:26.653239 containerd[1501]: time="2026-01-17T00:27:26.653188260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:26.653239 containerd[1501]: time="2026-01-17T00:27:26.653243141Z" level=info msg="RemovePodSandbox \"5f1eaa4f504fb148622ab79b04b1b660a317bb48cc4e0e403547fdcf7b80b333\" returns successfully" Jan 17 00:27:26.653857 containerd[1501]: time="2026-01-17T00:27:26.653811233Z" level=info msg="StopPodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\"" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.685 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb95b785-13b8-4aa9-b43b-38efbd205ceb", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb", Pod:"csi-node-driver-wn7sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali944594bddaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.685 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.685 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" iface="eth0" netns="" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.685 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.685 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.706 [INFO][5193] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.706 [INFO][5193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.706 [INFO][5193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.712 [WARNING][5193] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.712 [INFO][5193] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.713 [INFO][5193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.718048 containerd[1501]: 2026-01-17 00:27:26.715 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.718550 containerd[1501]: time="2026-01-17T00:27:26.718157925Z" level=info msg="TearDown network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" successfully" Jan 17 00:27:26.718550 containerd[1501]: time="2026-01-17T00:27:26.718183335Z" level=info msg="StopPodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" returns successfully" Jan 17 00:27:26.718754 containerd[1501]: time="2026-01-17T00:27:26.718729077Z" level=info msg="RemovePodSandbox for \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\"" Jan 17 00:27:26.718778 containerd[1501]: time="2026-01-17T00:27:26.718764047Z" level=info msg="Forcibly stopping sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\"" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.753 [WARNING][5208] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb95b785-13b8-4aa9-b43b-38efbd205ceb", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"015108f60831000de48bc1c0a597e595eaa552810a965c78a4b6fae78027d0fb", Pod:"csi-node-driver-wn7sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali944594bddaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.753 [INFO][5208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.753 [INFO][5208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" iface="eth0" netns="" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.753 [INFO][5208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.753 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.772 [INFO][5215] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.772 [INFO][5215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.772 [INFO][5215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.778 [WARNING][5215] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.778 [INFO][5215] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" HandleID="k8s-pod-network.30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Workload="ci--4081--3--6--n--e100e79615-k8s-csi--node--driver--wn7sn-eth0" Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.780 [INFO][5215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.785117 containerd[1501]: 2026-01-17 00:27:26.782 [INFO][5208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549" Jan 17 00:27:26.785117 containerd[1501]: time="2026-01-17T00:27:26.785186043Z" level=info msg="TearDown network for sandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" successfully" Jan 17 00:27:26.790724 containerd[1501]: time="2026-01-17T00:27:26.790677661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:26.790893 containerd[1501]: time="2026-01-17T00:27:26.790747632Z" level=info msg="RemovePodSandbox \"30102ecbd68f8dad1b7eebc6b63bf07d8791c72fae20f544db97065f0c52e549\" returns successfully" Jan 17 00:27:26.791490 containerd[1501]: time="2026-01-17T00:27:26.791463488Z" level=info msg="StopPodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\"" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.830 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c589a5-8e71-425c-a060-0cf6cb3ed239", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44", Pod:"calico-apiserver-6675cb976f-s7mzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e9d4212d78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.831 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.831 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" iface="eth0" netns="" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.831 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.831 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.856 [INFO][5236] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.856 [INFO][5236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.856 [INFO][5236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.862 [WARNING][5236] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.862 [INFO][5236] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.863 [INFO][5236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.868137 containerd[1501]: 2026-01-17 00:27:26.865 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.869452 containerd[1501]: time="2026-01-17T00:27:26.868907130Z" level=info msg="TearDown network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" successfully" Jan 17 00:27:26.869452 containerd[1501]: time="2026-01-17T00:27:26.868937270Z" level=info msg="StopPodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" returns successfully" Jan 17 00:27:26.869831 containerd[1501]: time="2026-01-17T00:27:26.869802869Z" level=info msg="RemovePodSandbox for \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\"" Jan 17 00:27:26.869899 containerd[1501]: time="2026-01-17T00:27:26.869838079Z" level=info msg="Forcibly stopping sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\"" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.901 [WARNING][5250] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0", GenerateName:"calico-apiserver-6675cb976f-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c589a5-8e71-425c-a060-0cf6cb3ed239", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6675cb976f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-e100e79615", ContainerID:"3b3d0c8239b43fedba5b1fbb351d2a99ec60b34a3f2bcbffeeeac12b9fa8cf44", Pod:"calico-apiserver-6675cb976f-s7mzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e9d4212d78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.902 [INFO][5250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.902 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" iface="eth0" netns="" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.902 [INFO][5250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.902 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.922 [INFO][5258] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.922 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.923 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.928 [WARNING][5258] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.928 [INFO][5258] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" HandleID="k8s-pod-network.5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Workload="ci--4081--3--6--n--e100e79615-k8s-calico--apiserver--6675cb976f--s7mzt-eth0" Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.929 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:27:26.934216 containerd[1501]: 2026-01-17 00:27:26.931 [INFO][5250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67" Jan 17 00:27:26.934216 containerd[1501]: time="2026-01-17T00:27:26.933729701Z" level=info msg="TearDown network for sandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" successfully" Jan 17 00:27:26.938084 containerd[1501]: time="2026-01-17T00:27:26.938047424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:27:26.938084 containerd[1501]: time="2026-01-17T00:27:26.938087985Z" level=info msg="RemovePodSandbox \"5fe36568b56f37ae0885938ce6eb85e2b2da230eaf0ab30919938732eed7ec67\" returns successfully" Jan 17 00:27:26.963373 containerd[1501]: time="2026-01-17T00:27:26.963277865Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:26.964463 containerd[1501]: time="2026-01-17T00:27:26.964422000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:27:26.964588 containerd[1501]: time="2026-01-17T00:27:26.964497821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:26.964648 kubelet[2562]: E0117 00:27:26.964614 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:26.965129 kubelet[2562]: E0117 00:27:26.964664 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:26.965129 kubelet[2562]: E0117 00:27:26.964795 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nkr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b8756fdc-7qc6h_calico-system(481372c3-ef6e-46bf-86cb-78fea87a79f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:26.966036 kubelet[2562]: E0117 00:27:26.965966 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:27:27.521268 containerd[1501]: time="2026-01-17T00:27:27.520963601Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:27:27.964636 containerd[1501]: time="2026-01-17T00:27:27.964555236Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:27.966174 containerd[1501]: time="2026-01-17T00:27:27.966066508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:27:27.966728 containerd[1501]: time="2026-01-17T00:27:27.966113079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:27:27.966796 kubelet[2562]: E0117 00:27:27.966448 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:27.966796 kubelet[2562]: E0117 00:27:27.966516 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:27.966796 kubelet[2562]: E0117 00:27:27.966667 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:27.970750 containerd[1501]: time="2026-01-17T00:27:27.970431539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:27:28.398039 containerd[1501]: time="2026-01-17T00:27:28.397881309Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:28.399379 containerd[1501]: time="2026-01-17T00:27:28.399312048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:27:28.399477 containerd[1501]: time="2026-01-17T00:27:28.399410550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:27:28.399624 kubelet[2562]: E0117 00:27:28.399569 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:28.399676 kubelet[2562]: E0117 00:27:28.399626 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:28.399751 kubelet[2562]: E0117 00:27:28.399720 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:28.401228 kubelet[2562]: E0117 00:27:28.401175 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:27:29.520263 containerd[1501]: time="2026-01-17T00:27:29.519747098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:27:29.949950 containerd[1501]: time="2026-01-17T00:27:29.949724369Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:29.951819 containerd[1501]: time="2026-01-17T00:27:29.951530915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:27:29.951819 containerd[1501]: time="2026-01-17T00:27:29.951576966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:29.952015 kubelet[2562]: E0117 00:27:29.951893 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:29.952015 kubelet[2562]: E0117 00:27:29.951958 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:29.953220 kubelet[2562]: E0117 00:27:29.953136 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnczc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-s7mzt_calico-apiserver(73c589a5-8e71-425c-a060-0cf6cb3ed239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:29.954988 containerd[1501]: time="2026-01-17T00:27:29.954653508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:27:29.955578 kubelet[2562]: E0117 00:27:29.955319 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:27:30.396293 containerd[1501]: time="2026-01-17T00:27:30.396183787Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:30.397818 containerd[1501]: time="2026-01-17T00:27:30.397722467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:27:30.398654 containerd[1501]: time="2026-01-17T00:27:30.397847320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:30.398732 kubelet[2562]: E0117 00:27:30.398150 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:30.398732 kubelet[2562]: E0117 00:27:30.398224 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:30.398732 kubelet[2562]: E0117 00:27:30.398421 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn44h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ssv4k_calico-system(a05bf132-cecb-477a-a941-f502759ced80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:30.400475 kubelet[2562]: E0117 00:27:30.400381 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:27:37.527158 kubelet[2562]: E0117 
00:27:37.524280 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:27:38.517367 kubelet[2562]: E0117 00:27:38.516987 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:27:38.861873 systemd[1]: run-containerd-runc-k8s.io-b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688-runc.13ctqt.mount: Deactivated successfully. 
Jan 17 00:27:41.518313 kubelet[2562]: E0117 00:27:41.517606 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:27:43.522137 kubelet[2562]: E0117 00:27:43.521325 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:27:43.522137 kubelet[2562]: E0117 00:27:43.521594 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:27:44.516545 kubelet[2562]: E0117 00:27:44.515832 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:27:50.517637 containerd[1501]: time="2026-01-17T00:27:50.517581345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:27:50.965591 containerd[1501]: time="2026-01-17T00:27:50.965505862Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:50.967175 containerd[1501]: time="2026-01-17T00:27:50.967026722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:27:50.967175 containerd[1501]: time="2026-01-17T00:27:50.967079452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:27:50.967565 kubelet[2562]: E0117 00:27:50.967501 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:50.967565 kubelet[2562]: E0117 00:27:50.967564 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:50.968356 kubelet[2562]: E0117 00:27:50.967703 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4cfe1eccd3c14c02813c65fb803c84dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:50.971365 containerd[1501]: time="2026-01-17T00:27:50.970953424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:27:51.409625 containerd[1501]: time="2026-01-17T00:27:51.409445794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:51.411000 containerd[1501]: time="2026-01-17T00:27:51.410676389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:27:51.411000 containerd[1501]: time="2026-01-17T00:27:51.410751090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:51.411181 kubelet[2562]: E0117 00:27:51.411136 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:51.411226 kubelet[2562]: E0117 00:27:51.411187 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:51.412352 kubelet[2562]: E0117 00:27:51.412281 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:51.413643 kubelet[2562]: E0117 00:27:51.413593 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:27:51.521775 containerd[1501]: time="2026-01-17T00:27:51.521709528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:27:51.967219 containerd[1501]: time="2026-01-17T00:27:51.967134682Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:51.968712 containerd[1501]: time="2026-01-17T00:27:51.968659352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:27:51.968996 containerd[1501]: time="2026-01-17T00:27:51.968759653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:51.969086 kubelet[2562]: E0117 00:27:51.969021 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:51.969731 kubelet[2562]: E0117 00:27:51.969089 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:51.970507 kubelet[2562]: E0117 00:27:51.969972 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nkr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b8756fdc-7qc6h_calico-system(481372c3-ef6e-46bf-86cb-78fea87a79f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:51.971283 kubelet[2562]: E0117 00:27:51.971153 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:27:53.522303 containerd[1501]: time="2026-01-17T00:27:53.522250902Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:27:53.952361 containerd[1501]: time="2026-01-17T00:27:53.952204309Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:53.953671 containerd[1501]: time="2026-01-17T00:27:53.953570416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:27:53.953826 containerd[1501]: time="2026-01-17T00:27:53.953681737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:53.954025 kubelet[2562]: E0117 00:27:53.953896 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:53.954025 kubelet[2562]: E0117 00:27:53.953967 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:53.955252 kubelet[2562]: E0117 00:27:53.954123 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz5jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-qgmnw_calico-apiserver(7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:53.955634 kubelet[2562]: E0117 00:27:53.955562 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:27:54.518208 containerd[1501]: time="2026-01-17T00:27:54.517159591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:27:54.942200 containerd[1501]: time="2026-01-17T00:27:54.942018387Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:54.943436 containerd[1501]: time="2026-01-17T00:27:54.943382973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:27:54.943599 containerd[1501]: time="2026-01-17T00:27:54.943509694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:54.943777 kubelet[2562]: E0117 00:27:54.943708 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:54.943838 kubelet[2562]: E0117 00:27:54.943796 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:27:54.944047 kubelet[2562]: E0117 
00:27:54.943985 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnczc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-s7mzt_calico-apiserver(73c589a5-8e71-425c-a060-0cf6cb3ed239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:54.945927 kubelet[2562]: E0117 00:27:54.945838 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:27:57.518227 containerd[1501]: time="2026-01-17T00:27:57.517914289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:27:57.955491 containerd[1501]: time="2026-01-17T00:27:57.955108003Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:57.956715 containerd[1501]: time="2026-01-17T00:27:57.956597680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:27:57.956715 containerd[1501]: time="2026-01-17T00:27:57.956673841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:27:57.958072 kubelet[2562]: E0117 00:27:57.956931 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:57.958072 kubelet[2562]: E0117 00:27:57.956985 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:57.958072 kubelet[2562]: E0117 00:27:57.957086 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:57.959780 containerd[1501]: time="2026-01-17T00:27:57.959513555Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:27:58.387537 containerd[1501]: time="2026-01-17T00:27:58.387327409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:58.389130 containerd[1501]: time="2026-01-17T00:27:58.388890887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:27:58.389130 containerd[1501]: time="2026-01-17T00:27:58.388992158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:27:58.389298 kubelet[2562]: E0117 00:27:58.389162 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:58.389298 kubelet[2562]: E0117 00:27:58.389230 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:58.390175 kubelet[2562]: E0117 00:27:58.389354 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:58.390597 kubelet[2562]: E0117 00:27:58.390550 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:27:58.517467 containerd[1501]: time="2026-01-17T00:27:58.517116580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:27:58.946050 containerd[1501]: time="2026-01-17T00:27:58.945918678Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:58.947559 containerd[1501]: time="2026-01-17T00:27:58.947476186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:27:58.947729 containerd[1501]: time="2026-01-17T00:27:58.947520337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:58.948014 kubelet[2562]: E0117 00:27:58.947938 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:58.948014 kubelet[2562]: E0117 00:27:58.948008 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:58.948322 kubelet[2562]: E0117 00:27:58.948175 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn44h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ssv4k_calico-system(a05bf132-cecb-477a-a941-f502759ced80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:58.950120 kubelet[2562]: E0117 00:27:58.950070 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:28:03.520238 kubelet[2562]: E0117 00:28:03.519677 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:28:04.521727 kubelet[2562]: E0117 00:28:04.521657 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:28:07.519530 kubelet[2562]: E0117 00:28:07.519254 2562 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:28:08.518487 kubelet[2562]: E0117 00:28:08.517838 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:28:08.864631 systemd[1]: run-containerd-runc-k8s.io-b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688-runc.MGlWm2.mount: Deactivated successfully. Jan 17 00:28:11.522799 kubelet[2562]: E0117 00:28:11.522264 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:28:12.519015 kubelet[2562]: E0117 00:28:12.518825 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:28:12.694863 systemd[1]: Started sshd@7-135.181.41.243:22-20.161.92.111:56566.service - OpenSSH per-connection server daemon (20.161.92.111:56566). Jan 17 00:28:13.451718 sshd[5336]: Accepted publickey for core from 20.161.92.111 port 56566 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:13.457466 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:13.465830 systemd-logind[1487]: New session 8 of user core. 
Jan 17 00:28:13.476425 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:28:14.095976 sshd[5336]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:14.102298 systemd[1]: sshd@7-135.181.41.243:22-20.161.92.111:56566.service: Deactivated successfully. Jan 17 00:28:14.102353 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:28:14.106024 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:28:14.108539 systemd-logind[1487]: Removed session 8. Jan 17 00:28:15.521139 kubelet[2562]: E0117 00:28:15.520130 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:28:17.521319 kubelet[2562]: E0117 00:28:17.521242 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:28:19.236736 systemd[1]: Started sshd@8-135.181.41.243:22-20.161.92.111:56574.service - OpenSSH per-connection server daemon (20.161.92.111:56574). Jan 17 00:28:20.015133 sshd[5352]: Accepted publickey for core from 20.161.92.111 port 56574 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:20.019817 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:20.031945 systemd-logind[1487]: New session 9 of user core. Jan 17 00:28:20.040496 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:28:20.659432 sshd[5352]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:20.668610 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:28:20.669532 systemd[1]: sshd@8-135.181.41.243:22-20.161.92.111:56574.service: Deactivated successfully. Jan 17 00:28:20.676776 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:28:20.678441 systemd-logind[1487]: Removed session 9. 
Jan 17 00:28:21.517547 kubelet[2562]: E0117 00:28:21.516949 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:28:21.519587 kubelet[2562]: E0117 00:28:21.519189 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:28:23.519138 kubelet[2562]: E0117 00:28:23.518929 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:28:25.804540 systemd[1]: Started sshd@9-135.181.41.243:22-20.161.92.111:52368.service - OpenSSH per-connection server daemon (20.161.92.111:52368). Jan 17 00:28:26.576035 sshd[5368]: Accepted publickey for core from 20.161.92.111 port 52368 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:26.583491 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:26.597467 systemd-logind[1487]: New session 10 of user core. Jan 17 00:28:26.604427 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:28:27.253336 sshd[5368]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:27.257968 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:28:27.260071 systemd[1]: sshd@9-135.181.41.243:22-20.161.92.111:52368.service: Deactivated successfully. Jan 17 00:28:27.263945 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:28:27.267298 systemd-logind[1487]: Removed session 10. Jan 17 00:28:27.393208 systemd[1]: Started sshd@10-135.181.41.243:22-20.161.92.111:52376.service - OpenSSH per-connection server daemon (20.161.92.111:52376). 
Jan 17 00:28:27.519228 kubelet[2562]: E0117 00:28:27.519053 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:28:28.158289 sshd[5382]: Accepted publickey for core from 20.161.92.111 port 52376 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:28.161061 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:28.176191 systemd-logind[1487]: New session 11 of user core. Jan 17 00:28:28.182680 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:28:28.844193 sshd[5382]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:28.849632 systemd[1]: sshd@10-135.181.41.243:22-20.161.92.111:52376.service: Deactivated successfully. Jan 17 00:28:28.853018 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:28:28.857687 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:28:28.860045 systemd-logind[1487]: Removed session 11. Jan 17 00:28:28.983294 systemd[1]: Started sshd@11-135.181.41.243:22-20.161.92.111:52384.service - OpenSSH per-connection server daemon (20.161.92.111:52384). 
Jan 17 00:28:29.519921 kubelet[2562]: E0117 00:28:29.519866 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:28:29.520936 kubelet[2562]: E0117 00:28:29.520655 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:28:29.746382 sshd[5397]: Accepted publickey for core from 20.161.92.111 port 52384 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:29.749060 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:29.757285 systemd-logind[1487]: New session 12 of user core. Jan 17 00:28:29.761816 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:28:30.348456 sshd[5397]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:30.355887 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:28:30.358248 systemd[1]: sshd@11-135.181.41.243:22-20.161.92.111:52384.service: Deactivated successfully. Jan 17 00:28:30.367000 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:28:30.375181 systemd-logind[1487]: Removed session 12. 
Jan 17 00:28:34.517901 containerd[1501]: time="2026-01-17T00:28:34.517848047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:34.955534 containerd[1501]: time="2026-01-17T00:28:34.955377312Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:34.956774 containerd[1501]: time="2026-01-17T00:28:34.956726832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:34.957605 containerd[1501]: time="2026-01-17T00:28:34.956816323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:34.957708 kubelet[2562]: E0117 00:28:34.956999 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:34.957708 kubelet[2562]: E0117 00:28:34.957046 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:34.957708 kubelet[2562]: E0117 00:28:34.957190 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz5jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-qgmnw_calico-apiserver(7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:34.958759 kubelet[2562]: E0117 00:28:34.958711 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:28:35.485640 systemd[1]: Started sshd@12-135.181.41.243:22-20.161.92.111:34942.service - OpenSSH per-connection server daemon (20.161.92.111:34942). 
Jan 17 00:28:35.520111 containerd[1501]: time="2026-01-17T00:28:35.519873050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:35.953253 containerd[1501]: time="2026-01-17T00:28:35.952677615Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:35.954155 containerd[1501]: time="2026-01-17T00:28:35.954111996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:35.954249 containerd[1501]: time="2026-01-17T00:28:35.954204357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:35.954564 kubelet[2562]: E0117 00:28:35.954506 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:35.954633 kubelet[2562]: E0117 00:28:35.954571 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:35.955025 kubelet[2562]: E0117 00:28:35.954696 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnczc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6675cb976f-s7mzt_calico-apiserver(73c589a5-8e71-425c-a060-0cf6cb3ed239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:35.955847 kubelet[2562]: E0117 00:28:35.955811 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:28:36.237225 sshd[5418]: Accepted publickey for core from 20.161.92.111 port 34942 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:36.240620 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:36.247427 systemd-logind[1487]: New session 13 of user core. Jan 17 00:28:36.254311 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:28:36.827865 sshd[5418]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:36.837307 systemd[1]: sshd@12-135.181.41.243:22-20.161.92.111:34942.service: Deactivated successfully. Jan 17 00:28:36.840699 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:28:36.841922 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:28:36.843477 systemd-logind[1487]: Removed session 13. Jan 17 00:28:36.963231 systemd[1]: Started sshd@13-135.181.41.243:22-20.161.92.111:34948.service - OpenSSH per-connection server daemon (20.161.92.111:34948). Jan 17 00:28:37.739715 sshd[5433]: Accepted publickey for core from 20.161.92.111 port 34948 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:37.741785 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:37.750299 systemd-logind[1487]: New session 14 of user core. Jan 17 00:28:37.755266 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:28:38.490887 sshd[5433]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:38.497832 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:28:38.500697 systemd[1]: sshd@13-135.181.41.243:22-20.161.92.111:34948.service: Deactivated successfully. 
Jan 17 00:28:38.504044 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:28:38.506724 systemd-logind[1487]: Removed session 14. Jan 17 00:28:38.518168 kubelet[2562]: E0117 00:28:38.517615 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:28:38.625719 systemd[1]: Started sshd@14-135.181.41.243:22-20.161.92.111:34950.service - OpenSSH per-connection server daemon (20.161.92.111:34950). Jan 17 00:28:38.856464 systemd[1]: run-containerd-runc-k8s.io-b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688-runc.Z0JcGc.mount: Deactivated successfully. Jan 17 00:28:39.392899 sshd[5444]: Accepted publickey for core from 20.161.92.111 port 34950 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:39.395419 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:39.401016 systemd-logind[1487]: New session 15 of user core. Jan 17 00:28:39.411412 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:28:39.522459 containerd[1501]: time="2026-01-17T00:28:39.520891215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:28:39.962464 containerd[1501]: time="2026-01-17T00:28:39.962277585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:39.963918 containerd[1501]: time="2026-01-17T00:28:39.963780246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:28:39.963918 containerd[1501]: time="2026-01-17T00:28:39.963868247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:28:39.964577 kubelet[2562]: E0117 00:28:39.964164 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:28:39.964577 kubelet[2562]: E0117 00:28:39.964221 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:28:39.964577 kubelet[2562]: E0117 00:28:39.964319 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:39.967639 containerd[1501]: time="2026-01-17T00:28:39.967410513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:28:40.389894 containerd[1501]: time="2026-01-17T00:28:40.389710370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:40.391931 containerd[1501]: time="2026-01-17T00:28:40.391868977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:28:40.392064 containerd[1501]: time="2026-01-17T00:28:40.391990387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:28:40.393380 kubelet[2562]: E0117 00:28:40.393311 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:28:40.393537 kubelet[2562]: E0117 00:28:40.393413 2562 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:28:40.393819 kubelet[2562]: E0117 00:28:40.393755 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knchd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wn7sn_calico-system(eb95b785-13b8-4aa9-b43b-38efbd205ceb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:40.394970 kubelet[2562]: E0117 00:28:40.394933 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:28:40.693506 sshd[5444]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:40.698760 systemd[1]: sshd@14-135.181.41.243:22-20.161.92.111:34950.service: Deactivated successfully. Jan 17 00:28:40.702206 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:28:40.705180 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:28:40.708222 systemd-logind[1487]: Removed session 15. Jan 17 00:28:40.836599 systemd[1]: Started sshd@15-135.181.41.243:22-20.161.92.111:34964.service - OpenSSH per-connection server daemon (20.161.92.111:34964). Jan 17 00:28:41.616030 sshd[5486]: Accepted publickey for core from 20.161.92.111 port 34964 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:41.617212 sshd[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:41.625646 systemd-logind[1487]: New session 16 of user core. Jan 17 00:28:41.631993 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:28:42.360389 sshd[5486]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:42.365964 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:28:42.369458 systemd[1]: sshd@15-135.181.41.243:22-20.161.92.111:34964.service: Deactivated successfully. Jan 17 00:28:42.374017 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:28:42.377418 systemd-logind[1487]: Removed session 16. Jan 17 00:28:42.496860 systemd[1]: Started sshd@16-135.181.41.243:22-20.161.92.111:55642.service - OpenSSH per-connection server daemon (20.161.92.111:55642). Jan 17 00:28:43.254055 sshd[5497]: Accepted publickey for core from 20.161.92.111 port 55642 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:43.255969 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:43.263536 systemd-logind[1487]: New session 17 of user core. Jan 17 00:28:43.269259 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:28:43.520285 containerd[1501]: time="2026-01-17T00:28:43.518324237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:28:43.885955 sshd[5497]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:43.894802 systemd[1]: sshd@16-135.181.41.243:22-20.161.92.111:55642.service: Deactivated successfully. Jan 17 00:28:43.900181 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:28:43.903819 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:28:43.908750 systemd-logind[1487]: Removed session 17. 
Jan 17 00:28:43.946730 containerd[1501]: time="2026-01-17T00:28:43.946385188Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:43.949344 containerd[1501]: time="2026-01-17T00:28:43.948996567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:28:43.949344 containerd[1501]: time="2026-01-17T00:28:43.949021968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:43.949643 kubelet[2562]: E0117 00:28:43.949366 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:28:43.949643 kubelet[2562]: E0117 00:28:43.949436 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:28:43.952573 kubelet[2562]: E0117 00:28:43.950474 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nkr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b8756fdc-7qc6h_calico-system(481372c3-ef6e-46bf-86cb-78fea87a79f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:43.952573 kubelet[2562]: E0117 00:28:43.952223 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9" Jan 17 00:28:43.954529 containerd[1501]: time="2026-01-17T00:28:43.950724210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:28:44.400006 containerd[1501]: time="2026-01-17T00:28:44.399841298Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:44.402502 containerd[1501]: time="2026-01-17T00:28:44.402160305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:28:44.402502 containerd[1501]: time="2026-01-17T00:28:44.402203375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:28:44.403843 kubelet[2562]: E0117 00:28:44.403009 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:44.403843 kubelet[2562]: E0117 00:28:44.403091 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:44.403843 kubelet[2562]: E0117 
00:28:44.403275 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4cfe1eccd3c14c02813c65fb803c84dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:44.406826 containerd[1501]: time="2026-01-17T00:28:44.406403296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:28:44.841493 containerd[1501]: time="2026-01-17T00:28:44.841412910Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:44.843111 containerd[1501]: time="2026-01-17T00:28:44.842977681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:28:44.843310 containerd[1501]: time="2026-01-17T00:28:44.843046022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:44.844209 kubelet[2562]: E0117 00:28:44.843838 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:44.844209 kubelet[2562]: E0117 00:28:44.843928 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:44.844209 kubelet[2562]: E0117 00:28:44.844118 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scctk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f57f5f859-j9rxk_calico-system(95fdc174-63e2-499b-8b79-a226c39e6eaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:44.846179 kubelet[2562]: E0117 00:28:44.846104 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" Jan 17 00:28:47.521264 kubelet[2562]: E0117 00:28:47.520877 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee" Jan 17 00:28:49.026048 systemd[1]: Started sshd@17-135.181.41.243:22-20.161.92.111:55644.service - OpenSSH per-connection server daemon (20.161.92.111:55644). Jan 17 00:28:49.522125 kubelet[2562]: E0117 00:28:49.520987 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239" Jan 17 00:28:49.522701 containerd[1501]: time="2026-01-17T00:28:49.521903077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:28:49.778003 sshd[5532]: Accepted publickey for core from 20.161.92.111 port 55644 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE Jan 17 00:28:49.782021 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:49.790527 systemd-logind[1487]: New session 18 of user core. Jan 17 00:28:49.797356 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 00:28:49.958232 containerd[1501]: time="2026-01-17T00:28:49.958148926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:49.961137 containerd[1501]: time="2026-01-17T00:28:49.960347141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:28:49.961137 containerd[1501]: time="2026-01-17T00:28:49.960494122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:49.963006 kubelet[2562]: E0117 00:28:49.961524 2562 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:28:49.963006 kubelet[2562]: E0117 00:28:49.961600 2562 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:28:49.963006 kubelet[2562]: E0117 00:28:49.961803 2562 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn44h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ssv4k_calico-system(a05bf132-cecb-477a-a941-f502759ced80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:49.964447 kubelet[2562]: E0117 00:28:49.964256 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80" Jan 17 00:28:50.386338 sshd[5532]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:50.393487 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:28:50.395729 systemd[1]: sshd@17-135.181.41.243:22-20.161.92.111:55644.service: Deactivated successfully. Jan 17 00:28:50.402159 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:28:50.403992 systemd-logind[1487]: Removed session 18. Jan 17 00:28:53.536197 kubelet[2562]: E0117 00:28:53.536068 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb" Jan 17 00:28:55.528521 systemd[1]: Started sshd@18-135.181.41.243:22-20.161.92.111:39388.service - OpenSSH per-connection server daemon (20.161.92.111:39388). 
Jan 17 00:28:56.287945 sshd[5545]: Accepted publickey for core from 20.161.92.111 port 39388 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:28:56.293697 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:56.300641 systemd-logind[1487]: New session 19 of user core.
Jan 17 00:28:56.306508 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:28:56.904371 sshd[5545]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:56.909418 systemd[1]: sshd@18-135.181.41.243:22-20.161.92.111:39388.service: Deactivated successfully.
Jan 17 00:28:56.913837 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:28:56.915838 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:28:56.921322 systemd-logind[1487]: Removed session 19.
Jan 17 00:28:58.518809 kubelet[2562]: E0117 00:28:58.518717 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf"
Jan 17 00:28:58.521651 kubelet[2562]: E0117 00:28:58.521615 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9"
Jan 17 00:29:00.519137 kubelet[2562]: E0117 00:29:00.517703 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee"
Jan 17 00:29:02.048223 systemd[1]: Started sshd@19-135.181.41.243:22-20.161.92.111:39390.service - OpenSSH per-connection server daemon (20.161.92.111:39390).
Jan 17 00:29:02.801899 sshd[5558]: Accepted publickey for core from 20.161.92.111 port 39390 ssh2: RSA SHA256:X2mgP45nVkft7Ss8TR9hqcppzZ5HLZCqnkArfSq+OHE
Jan 17 00:29:02.805213 sshd[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:29:02.815025 systemd-logind[1487]: New session 20 of user core.
Jan 17 00:29:02.822307 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:29:03.465363 sshd[5558]: pam_unix(sshd:session): session closed for user core
Jan 17 00:29:03.471157 systemd[1]: sshd@19-135.181.41.243:22-20.161.92.111:39390.service: Deactivated successfully.
Jan 17 00:29:03.475121 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:29:03.476666 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:29:03.477782 systemd-logind[1487]: Removed session 20.
Jan 17 00:29:03.518503 kubelet[2562]: E0117 00:29:03.518365 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239"
Jan 17 00:29:04.517008 kubelet[2562]: E0117 00:29:04.516914 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80"
Jan 17 00:29:08.518184 kubelet[2562]: E0117 00:29:08.518089 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:29:08.857795 systemd[1]: run-containerd-runc-k8s.io-b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688-runc.iDPJUz.mount: Deactivated successfully.
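Note that once a pull has failed, the errors above switch from `ErrImagePull` to `ImagePullBackOff`: kubelet is no longer pulling on every sync, it is waiting out an exponential back-off between attempts. A sketch of that schedule, assuming the commonly documented kubelet defaults (10s initial delay, doubling per failure, capped at 5 minutes); the exact constants live in kubelet's image manager and may differ by version, so treat them as assumptions rather than values read from this log:

```python
# Sketch of an exponential image-pull back-off schedule.
# Assumed constants: 10s initial, x2 growth, 300s cap -- these mirror the
# commonly documented kubelet defaults, not values observed in this log.
INITIAL_S = 10
FACTOR = 2
CAP_S = 300

def backoff_delays(failures):
    """Yield the wait before each retry after `failures` consecutive failures."""
    delay = INITIAL_S
    for _ in range(failures):
        yield delay
        delay = min(delay * FACTOR, CAP_S)

if __name__ == "__main__":
    # After six failed pulls the waits are: 10, 20, 40, 80, 160, 300 seconds.
    print(list(backoff_delays(6)))
```

This is why the same pods reappear below at progressively longer intervals rather than on every kubelet sync.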
Jan 17 00:29:11.517426 kubelet[2562]: E0117 00:29:11.517320 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf"
Jan 17 00:29:12.516496 kubelet[2562]: E0117 00:29:12.516392 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9"
Jan 17 00:29:15.517720 kubelet[2562]: E0117 00:29:15.517365 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee"
Jan 17 00:29:17.516136 kubelet[2562]: E0117 00:29:17.515964 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80"
Jan 17 00:29:18.516700 kubelet[2562]: E0117 00:29:18.516642 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239"
Jan 17 00:29:19.519166 kubelet[2562]: E0117 00:29:19.519054 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:29:20.751924 systemd[1]: cri-containerd-4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1.scope: Deactivated successfully.
Jan 17 00:29:20.752410 systemd[1]: cri-containerd-4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1.scope: Consumed 22.663s CPU time.
Jan 17 00:29:20.788239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1-rootfs.mount: Deactivated successfully.
Jan 17 00:29:20.794657 containerd[1501]: time="2026-01-17T00:29:20.794546892Z" level=info msg="shim disconnected" id=4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1 namespace=k8s.io
Jan 17 00:29:20.794657 containerd[1501]: time="2026-01-17T00:29:20.794624000Z" level=warning msg="cleaning up after shim disconnected" id=4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1 namespace=k8s.io
Jan 17 00:29:20.794657 containerd[1501]: time="2026-01-17T00:29:20.794633580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:29:20.817078 containerd[1501]: time="2026-01-17T00:29:20.816913200Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:29:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:29:20.980066 systemd[1]: cri-containerd-6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3.scope: Deactivated successfully.
Jan 17 00:29:20.982178 systemd[1]: cri-containerd-6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3.scope: Consumed 4.278s CPU time, 19.2M memory peak, 0B memory swap peak.
Jan 17 00:29:21.015144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3-rootfs.mount: Deactivated successfully.
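When systemd tears down a `cri-containerd-*.scope`, it logs the cgroup's accumulated resource accounting, as in the `Consumed 22.663s CPU time` and `Consumed 4.278s CPU time, 19.2M memory peak` lines above. A small parser for those lines (a sketch; it only understands the two message shapes visible in this log, and the container IDs in the sample are shortened for readability):

```python
import re

# Matches both accounting shapes seen above:
#   "cri-containerd-<id>.scope: Consumed 22.663s CPU time."
#   "cri-containerd-<id>.scope: Consumed 4.278s CPU time, 19.2M memory peak, ..."
SCOPE_RE = re.compile(
    r'(cri-containerd-[0-9a-f]+)\.scope: Consumed (?P<cpu>[\d.]+)s CPU time'
    r'(?:, (?P<mem>[\d.]+[KMGT]?)B? memory peak)?'
)

def scope_usage(lines):
    """Return {scope_name: (cpu_seconds, memory_peak_or_None)}."""
    usage = {}
    for line in lines:
        m = SCOPE_RE.search(line)
        if m:
            usage[m.group(1)] = (float(m.group("cpu")), m.group("mem"))
    return usage

if __name__ == "__main__":
    sample = [
        "Jan 17 00:29:20.752410 systemd[1]: cri-containerd-4a736871.scope: Consumed 22.663s CPU time.",
        "Jan 17 00:29:20.982178 systemd[1]: cri-containerd-6c89fb3f.scope: Consumed 4.278s CPU time, 19.2M memory peak, 0B memory swap peak.",
    ]
    print(scope_usage(sample))
```

The 22.663s figure stands out against the others in this window: that container (4a73..., removed below as the old tigera-operator instance) had accumulated far more CPU time than the control-plane containers exiting around it.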
Jan 17 00:29:21.020907 containerd[1501]: time="2026-01-17T00:29:21.020627363Z" level=info msg="shim disconnected" id=6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3 namespace=k8s.io
Jan 17 00:29:21.020907 containerd[1501]: time="2026-01-17T00:29:21.020690641Z" level=warning msg="cleaning up after shim disconnected" id=6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3 namespace=k8s.io
Jan 17 00:29:21.020907 containerd[1501]: time="2026-01-17T00:29:21.020700111Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:29:21.192522 kubelet[2562]: E0117 00:29:21.192358 2562 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59106->10.0.0.2:2379: read: connection timed out"
Jan 17 00:29:21.259819 kubelet[2562]: I0117 00:29:21.259773 2562 scope.go:117] "RemoveContainer" containerID="6c89fb3ff5d69eb09cc72e8ef33c280ad8fe9c798b34de1c8ff82ab927877cc3"
Jan 17 00:29:21.261818 kubelet[2562]: I0117 00:29:21.261788 2562 scope.go:117] "RemoveContainer" containerID="4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1"
Jan 17 00:29:21.263539 containerd[1501]: time="2026-01-17T00:29:21.263429245Z" level=info msg="CreateContainer within sandbox \"e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 00:29:21.263539 containerd[1501]: time="2026-01-17T00:29:21.263457624Z" level=info msg="CreateContainer within sandbox \"4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:29:21.294412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116439318.mount: Deactivated successfully.
Jan 17 00:29:21.299582 containerd[1501]: time="2026-01-17T00:29:21.299382968Z" level=info msg="CreateContainer within sandbox \"e2ec6907224adec4998fca1143dbfe421a24143fdc99185f9ef80f211b4a4cc4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb\""
Jan 17 00:29:21.301951 containerd[1501]: time="2026-01-17T00:29:21.301400801Z" level=info msg="StartContainer for \"f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb\""
Jan 17 00:29:21.301766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416397376.mount: Deactivated successfully.
Jan 17 00:29:21.312685 containerd[1501]: time="2026-01-17T00:29:21.312397330Z" level=info msg="CreateContainer within sandbox \"4656f05014facf2bcb0b732454b2c25b2df000fcfc8dd746c7e9f28c0e50aa7e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5f91804a6ca729eb58388775ccfb54bd2ab6e9401fc151e533c9b95d6e345a2a\""
Jan 17 00:29:21.314273 containerd[1501]: time="2026-01-17T00:29:21.314216609Z" level=info msg="StartContainer for \"5f91804a6ca729eb58388775ccfb54bd2ab6e9401fc151e533c9b95d6e345a2a\""
Jan 17 00:29:21.356588 systemd[1]: Started cri-containerd-f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb.scope - libcontainer container f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb.
Jan 17 00:29:21.370365 systemd[1]: Started cri-containerd-5f91804a6ca729eb58388775ccfb54bd2ab6e9401fc151e533c9b95d6e345a2a.scope - libcontainer container 5f91804a6ca729eb58388775ccfb54bd2ab6e9401fc151e533c9b95d6e345a2a.
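The block above is kubelet's restart path in full: `RemoveContainer` for the dead instances, `CreateContainer` inside the still-running sandboxes (note `Attempt:1`), then `StartContainer` and a fresh `cri-containerd-*.scope`. A sketch that stitches these messages into a per-container timeline; the regexes are keyed on exactly the message shapes shown in this journal and are illustrative, not a general containerd log grammar:

```python
import re
from collections import defaultdict

# Lifecycle markers as they appear in this journal, each capturing a
# container ID. Inner quotes inside containerd's msg="..." appear as \".
EVENTS = [
    (re.compile(r'"RemoveContainer" containerID="([0-9a-f]+)"'), "remove requested"),
    (re.compile(r'returns container id \\"([0-9a-f]+)\\"'), "created"),
    (re.compile(r'msg="StartContainer for \\"([0-9a-f]+)\\""'), "start requested"),
    (re.compile(r'msg="StartContainer for \\"([0-9a-f]+)\\" returns successfully"'), "started"),
    (re.compile(r'msg="shim disconnected" id=([0-9a-f]+)'), "exited"),
]

def timeline(lines):
    """Group lifecycle events by short container ID, in log order."""
    events = defaultdict(list)
    for line in lines:
        ts = line[:22]  # e.g. "Jan 17 00:29:21.263539"
        for pattern, name in EVENTS:
            m = pattern.search(line)
            if m:
                events[m.group(1)[:12]].append((ts, name))
    return dict(events)
```

Applied to this window, the timeline for f604069fcd29 would read: created and started at 00:29:21, exited at 00:29:32, which is what drives the CrashLoopBackOff seen further down.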
Jan 17 00:29:21.416514 containerd[1501]: time="2026-01-17T00:29:21.415602431Z" level=info msg="StartContainer for \"f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb\" returns successfully"
Jan 17 00:29:21.448290 containerd[1501]: time="2026-01-17T00:29:21.448066763Z" level=info msg="StartContainer for \"5f91804a6ca729eb58388775ccfb54bd2ab6e9401fc151e533c9b95d6e345a2a\" returns successfully"
Jan 17 00:29:21.708805 kubelet[2562]: E0117 00:29:21.708509 2562 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58944->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{whisker-7f57f5f859-j9rxk.188b5d167cbe10dd calico-system 1618 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:whisker-7f57f5f859-j9rxk,UID:95fdc174-63e2-499b-8b79-a226c39e6eaf,APIVersion:v1,ResourceVersion:915,FieldPath:spec.containers{whisker},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/whisker:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-e100e79615,},FirstTimestamp:2026-01-17 00:27:09 +0000 UTC,LastTimestamp:2026-01-17 00:29:11.51647999 +0000 UTC m=+166.151317365,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-e100e79615,}"
Jan 17 00:29:21.709249 kubelet[2562]: I0117 00:29:21.709191 2562 status_manager.go:890] "Failed to get status for pod" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf" pod="calico-system/whisker-7f57f5f859-j9rxk" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59038->10.0.0.2:2379: read: connection timed out"
Jan 17 00:29:24.517011 kubelet[2562]: E0117 00:29:24.516963 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9"
Jan 17 00:29:26.261061 systemd[1]: cri-containerd-8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1.scope: Deactivated successfully.
Jan 17 00:29:26.262969 systemd[1]: cri-containerd-8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1.scope: Consumed 2.222s CPU time, 16.0M memory peak, 0B memory swap peak.
Jan 17 00:29:26.314590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1-rootfs.mount: Deactivated successfully.
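The rejected Event above is an aggregated one: `Count:7` between `FirstTimestamp` 00:27:09 and `LastTimestamp` 00:29:11 means kubelet deduplicated seven identical BackOff occurrences for the whisker container into a single Event object, which it then could not persist because the apiserver's etcd read timed out. A quick check of the implied cadence, using only the two timestamps copied from the event:

```python
from datetime import datetime

# Count:7 BackOff occurrences between the event's FirstTimestamp and
# LastTimestamp (both taken from the rejected Event above; sub-second
# precision dropped).
first = datetime.fromisoformat("2026-01-17 00:27:09+00:00")
last = datetime.fromisoformat("2026-01-17 00:29:11+00:00")
span = (last - first).total_seconds()  # 122 seconds
print(span / (7 - 1))  # ~20.3s between consecutive back-off occurrences
```

That roughly 20-second spacing is consistent with a pod sitting in the early steps of the pull back-off rather than at the 5-minute cap.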
Jan 17 00:29:26.329054 containerd[1501]: time="2026-01-17T00:29:26.328589750Z" level=info msg="shim disconnected" id=8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1 namespace=k8s.io
Jan 17 00:29:26.329054 containerd[1501]: time="2026-01-17T00:29:26.328722076Z" level=warning msg="cleaning up after shim disconnected" id=8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1 namespace=k8s.io
Jan 17 00:29:26.329054 containerd[1501]: time="2026-01-17T00:29:26.328740286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:29:26.517084 kubelet[2562]: E0117 00:29:26.516895 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf"
Jan 17 00:29:27.285723 kubelet[2562]: I0117 00:29:27.285669 2562 scope.go:117] "RemoveContainer" containerID="8707839f21e908cb57af658aa552afe7b026f45b956a241270d4382f7e2dcec1"
Jan 17 00:29:27.288169 containerd[1501]: time="2026-01-17T00:29:27.288057306Z" level=info msg="CreateContainer within sandbox \"7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:29:27.308942 containerd[1501]: time="2026-01-17T00:29:27.308863463Z" level=info msg="CreateContainer within sandbox \"7a9c3e1571c14b88478e9edb65259c2436fa29d4c92b3f1dd2e25e3a541d5052\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"539a684c7ae44508d8ad0b3d244817f835d674b70fe2c9fafba2537254f403db\""
Jan 17 00:29:27.312988 containerd[1501]: time="2026-01-17T00:29:27.311372858Z" level=info msg="StartContainer for \"539a684c7ae44508d8ad0b3d244817f835d674b70fe2c9fafba2537254f403db\""
Jan 17 00:29:27.387389 systemd[1]: Started cri-containerd-539a684c7ae44508d8ad0b3d244817f835d674b70fe2c9fafba2537254f403db.scope - libcontainer container 539a684c7ae44508d8ad0b3d244817f835d674b70fe2c9fafba2537254f403db.
Jan 17 00:29:27.471432 containerd[1501]: time="2026-01-17T00:29:27.471323875Z" level=info msg="StartContainer for \"539a684c7ae44508d8ad0b3d244817f835d674b70fe2c9fafba2537254f403db\" returns successfully"
Jan 17 00:29:28.517459 kubelet[2562]: E0117 00:29:28.517394 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-qgmnw" podUID="7ea2b3c0-00a9-42a4-a1ac-e5bd2a459fee"
Jan 17 00:29:29.516647 kubelet[2562]: E0117 00:29:29.516555 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80"
Jan 17 00:29:31.193940 kubelet[2562]: E0117 00:29:31.193588 2562 controller.go:195] "Failed to update lease" err="Put \"https://135.181.41.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e100e79615?timeout=10s\": context deadline exceeded"
Jan 17 00:29:32.516770 kubelet[2562]: E0117 00:29:32.516705 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6675cb976f-s7mzt" podUID="73c589a5-8e71-425c-a060-0cf6cb3ed239"
Jan 17 00:29:32.668721 systemd[1]: cri-containerd-f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb.scope: Deactivated successfully.
Jan 17 00:29:32.693895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb-rootfs.mount: Deactivated successfully.
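The replacement tigera-operator container (f604...) did not last: it was reported started at 00:29:21.416514 and its scope was deactivated at 00:29:32.668721. Simple arithmetic on those two timestamps from the log confirms the short run that triggers the CrashLoopBackOff below:

```python
from datetime import datetime

# Timestamps copied from the "returns successfully" and scope-deactivation
# lines above for container f604069fcd29...
FMT = "%H:%M:%S.%f"
started = datetime.strptime("00:29:21.416514", FMT)
exited = datetime.strptime("00:29:32.668721", FMT)
print((exited - started).total_seconds())  # ~11.25s of uptime
```

An 11-second run is far below any stability threshold, so kubelet counts it as another consecutive failure rather than a recovery.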
Jan 17 00:29:32.700136 containerd[1501]: time="2026-01-17T00:29:32.700012099Z" level=info msg="shim disconnected" id=f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb namespace=k8s.io
Jan 17 00:29:32.700136 containerd[1501]: time="2026-01-17T00:29:32.700124477Z" level=warning msg="cleaning up after shim disconnected" id=f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb namespace=k8s.io
Jan 17 00:29:32.700136 containerd[1501]: time="2026-01-17T00:29:32.700135376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:29:33.307502 kubelet[2562]: I0117 00:29:33.307456 2562 scope.go:117] "RemoveContainer" containerID="4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1"
Jan 17 00:29:33.310152 kubelet[2562]: I0117 00:29:33.308141 2562 scope.go:117] "RemoveContainer" containerID="f604069fcd29fef0f8cb636c6c2d8209198b012f4c29212fb09acc18f03f06cb"
Jan 17 00:29:33.310152 kubelet[2562]: E0117 00:29:33.308377 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-r7fg4_tigera-operator(5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00)\"" pod="tigera-operator/tigera-operator-7dcd859c48-r7fg4" podUID="5fc9e7dd-de14-4dbd-b66b-e2afbd21fa00"
Jan 17 00:29:33.312734 containerd[1501]: time="2026-01-17T00:29:33.312679578Z" level=info msg="RemoveContainer for \"4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1\""
Jan 17 00:29:33.320268 containerd[1501]: time="2026-01-17T00:29:33.320199623Z" level=info msg="RemoveContainer for \"4a7368710fbc3b9724ce91c66b2daa81c5301d624ed94e66b368e12d835216c1\" returns successfully"
Jan 17 00:29:34.516674 kubelet[2562]: E0117 00:29:34.516606 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wn7sn" podUID="eb95b785-13b8-4aa9-b43b-38efbd205ceb"
Jan 17 00:29:38.516739 kubelet[2562]: E0117 00:29:38.516661 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b8756fdc-7qc6h" podUID="481372c3-ef6e-46bf-86cb-78fea87a79f9"
Jan 17 00:29:38.872643 systemd[1]: run-containerd-runc-k8s.io-b2e509c4a29e2cbcde88ec8a53450484bfb47d3c744e2e0f0866632a5f60c688-runc.VR7Xcy.mount: Deactivated successfully.
Jan 17 00:29:40.517955 kubelet[2562]: E0117 00:29:40.517805 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f57f5f859-j9rxk" podUID="95fdc174-63e2-499b-8b79-a226c39e6eaf"
Jan 17 00:29:41.194542 kubelet[2562]: E0117 00:29:41.194455 2562 controller.go:195] "Failed to update lease" err="Put \"https://135.181.41.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e100e79615?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:29:41.516536 kubelet[2562]: E0117 00:29:41.516445 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ssv4k" podUID="a05bf132-cecb-477a-a941-f502759ced80"
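Three distinct node-lease renewal failure modes appear in this window, all pointing at the same unhealthy etcd behind the apiserver: a raw etcd read timeout (00:29:21), the request's 10s context deadline expiring (00:29:31), and the HTTP client giving up while awaiting headers (00:29:41). A sketch that buckets them; the substring keys are taken verbatim from the error texts above and are not an exhaustive taxonomy:

```python
# Bucket kubelet "Failed to update lease" errors by cause, keyed on
# substrings copied from the messages in this log.
CAUSES = {
    "read: connection timed out": "etcd read timed out behind the apiserver",
    "context deadline exceeded": "request exceeded its 10s timeout",
    "Client.Timeout exceeded": "HTTP client gave up awaiting headers",
}

def classify_lease_failures(lines):
    """Return (timestamp_prefix, cause) for each lease-update failure."""
    hits = []
    for line in lines:
        if "Failed to update lease" not in line:
            continue
        cause = next((v for k, v in CAUSES.items() if k in line), "unknown")
        hits.append((line[:22], cause))
    return hits
```

If these renewals keep failing past the lease duration, the node controller will eventually consider the node unhealthy, so the lease errors here are a more urgent signal than the long-running, well-understood image-pull failures surrounding them.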