Jan 24 00:55:58.125088 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:55:58.125105 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:55:58.125114 kernel: BIOS-provided physical RAM map:
Jan 24 00:55:58.125119 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:55:58.125123 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 24 00:55:58.125128 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 24 00:55:58.125133 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 24 00:55:58.125137 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Jan 24 00:55:58.125142 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Jan 24 00:55:58.125146 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Jan 24 00:55:58.125150 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 24 00:55:58.125157 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 24 00:55:58.125162 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 24 00:55:58.125166 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 24 00:55:58.125171 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 24 00:55:58.125176 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:55:58.125183 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 24 00:55:58.125188 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 24 00:55:58.125192 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:55:58.125197 kernel: NX (Execute Disable) protection: active
Jan 24 00:55:58.125201 kernel: APIC: Static calls initialized
Jan 24 00:55:58.125206 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 24 00:55:58.125211 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198
Jan 24 00:55:58.125215 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 24 00:55:58.125220 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 24 00:55:58.125225 kernel: SMBIOS 3.0.0 present.
Jan 24 00:55:58.125230 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 24 00:55:58.125234 kernel: Hypervisor detected: KVM
Jan 24 00:55:58.125241 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:55:58.125246 kernel: kvm-clock: using sched offset of 12490132375 cycles
Jan 24 00:55:58.125251 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:55:58.125256 kernel: tsc: Detected 2400.000 MHz processor
Jan 24 00:55:58.125261 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:55:58.125266 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:55:58.125270 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 24 00:55:58.125275 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:55:58.125280 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:55:58.125287 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 24 00:55:58.125292 kernel: Using GB pages for direct mapping
Jan 24 00:55:58.125296 kernel: Secure boot disabled
Jan 24 00:55:58.125304 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:55:58.125309 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 24 00:55:58.125314 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 24 00:55:58.125319 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:55:58.125327 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:55:58.125332 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 24 00:55:58.125337 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:55:58.125342 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:55:58.125347 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:55:58.125351 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:55:58.125356 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 24 00:55:58.125364 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 24 00:55:58.125369 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 24 00:55:58.125374 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 24 00:55:58.125378 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 24 00:55:58.125383 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 24 00:55:58.125388 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 24 00:55:58.125393 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 24 00:55:58.125398 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 24 00:55:58.125403 kernel: No NUMA configuration found
Jan 24 00:55:58.125410 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 24 00:55:58.125415 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Jan 24 00:55:58.125420 kernel: Zone ranges:
Jan 24 00:55:58.125425 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:55:58.125430 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:55:58.125435 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Jan 24 00:55:58.125440 kernel: Movable zone start for each node
Jan 24 00:55:58.125445 kernel: Early memory node ranges
Jan 24 00:55:58.125450 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:55:58.125455 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 24 00:55:58.125462 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 24 00:55:58.125467 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 24 00:55:58.125472 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 24 00:55:58.125477 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 24 00:55:58.125482 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:55:58.125487 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:55:58.125492 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 24 00:55:58.125497 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 24 00:55:58.125502 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 24 00:55:58.125509 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 24 00:55:58.125514 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:55:58.125519 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:55:58.125524 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:55:58.125529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:55:58.125534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:55:58.125551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:55:58.125556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:55:58.125561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:55:58.125568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:55:58.125573 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:55:58.125578 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:55:58.125583 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:55:58.125588 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 24 00:55:58.125593 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:55:58.125598 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:55:58.125603 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:55:58.125608 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:55:58.125615 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:55:58.125620 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:55:58.125625 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 24 00:55:58.125631 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:55:58.125636 kernel: random: crng init done
Jan 24 00:55:58.125641 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:55:58.125646 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:55:58.125651 kernel: Fallback order for Node 0: 0
Jan 24 00:55:58.125658 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Jan 24 00:55:58.125663 kernel: Policy zone: Normal
Jan 24 00:55:58.125668 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:55:58.125673 kernel: software IO TLB: area num 2.
Jan 24 00:55:58.125678 kernel: Memory: 3827772K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 263192K reserved, 0K cma-reserved)
Jan 24 00:55:58.125683 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:55:58.125688 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:55:58.125693 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:55:58.125698 kernel: Dynamic Preempt: voluntary
Jan 24 00:55:58.125703 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:55:58.125710 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:55:58.125716 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:55:58.125721 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:55:58.125739 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:55:58.125747 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:55:58.125752 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:55:58.125758 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:55:58.125763 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:55:58.125768 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:55:58.125773 kernel: Console: colour dummy device 80x25
Jan 24 00:55:58.125778 kernel: printk: console [tty0] enabled
Jan 24 00:55:58.125784 kernel: printk: console [ttyS0] enabled
Jan 24 00:55:58.125791 kernel: ACPI: Core revision 20230628
Jan 24 00:55:58.125797 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:55:58.125803 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:55:58.125808 kernel: x2apic enabled
Jan 24 00:55:58.125813 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:55:58.125821 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:55:58.125826 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:55:58.125831 kernel: Calibrating delay loop (skipped) preset value.. 4800.00 BogoMIPS (lpj=2400000)
Jan 24 00:55:58.125836 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:55:58.125841 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:55:58.125847 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:55:58.125852 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:55:58.125857 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 24 00:55:58.125865 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 00:55:58.125870 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 00:55:58.125875 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:55:58.125881 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 24 00:55:58.125886 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:55:58.125891 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:55:58.125896 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:55:58.125901 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:55:58.125906 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:55:58.125914 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:55:58.125919 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:55:58.125924 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:55:58.125929 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 24 00:55:58.125935 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:55:58.125940 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 24 00:55:58.125945 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 24 00:55:58.125950 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 24 00:55:58.125955 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 24 00:55:58.125963 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 24 00:55:58.125968 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:55:58.125973 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:55:58.125978 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:55:58.125984 kernel: landlock: Up and running.
Jan 24 00:55:58.125989 kernel: SELinux: Initializing.
Jan 24 00:55:58.125994 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:55:58.125999 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:55:58.126004 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 24 00:55:58.126012 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:55:58.126017 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:55:58.126022 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:55:58.126028 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 24 00:55:58.126033 kernel: ... version: 0
Jan 24 00:55:58.126038 kernel: ... bit width: 48
Jan 24 00:55:58.126043 kernel: ... generic registers: 6
Jan 24 00:55:58.126048 kernel: ... value mask: 0000ffffffffffff
Jan 24 00:55:58.126053 kernel: ... max period: 00007fffffffffff
Jan 24 00:55:58.126061 kernel: ... fixed-purpose events: 0
Jan 24 00:55:58.126066 kernel: ... event mask: 000000000000003f
Jan 24 00:55:58.126071 kernel: signal: max sigframe size: 3376
Jan 24 00:55:58.126076 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:55:58.126082 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:55:58.126087 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:55:58.126092 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:55:58.126097 kernel: .... node #0, CPUs: #1
Jan 24 00:55:58.126102 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:55:58.126110 kernel: smpboot: Max logical packages: 1
Jan 24 00:55:58.126115 kernel: smpboot: Total of 2 processors activated (9600.00 BogoMIPS)
Jan 24 00:55:58.126120 kernel: devtmpfs: initialized
Jan 24 00:55:58.126125 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:55:58.126131 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 24 00:55:58.126136 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:55:58.126141 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:55:58.126146 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:55:58.126152 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:55:58.126159 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:55:58.126164 kernel: audit: type=2000 audit(1769216157.285:1): state=initialized audit_enabled=0 res=1
Jan 24 00:55:58.126169 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:55:58.126175 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:55:58.126180 kernel: cpuidle: using governor menu
Jan 24 00:55:58.126185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:55:58.126190 kernel: dca service started, version 1.12.1
Jan 24 00:55:58.126195 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 24 00:55:58.126201 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:55:58.126208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:55:58.126213 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:55:58.126218 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:55:58.126224 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:55:58.126229 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:55:58.126234 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:55:58.126239 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:55:58.126244 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:55:58.126250 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:55:58.126257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:55:58.126262 kernel: ACPI: Interpreter enabled
Jan 24 00:55:58.126267 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:55:58.126272 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:55:58.126278 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:55:58.126283 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:55:58.126288 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:55:58.126293 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:55:58.126447 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:55:58.129588 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:55:58.129703 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:55:58.129711 kernel: PCI host bridge to bus 0000:00
Jan 24 00:55:58.129821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:55:58.129912 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:55:58.130000 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:55:58.130090 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 24 00:55:58.130176 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 24 00:55:58.130262 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 24 00:55:58.130348 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:55:58.130456 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:55:58.130623 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 24 00:55:58.130731 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Jan 24 00:55:58.130832 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 24 00:55:58.130927 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Jan 24 00:55:58.131023 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 24 00:55:58.131119 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 24 00:55:58.131215 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:55:58.131316 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.131412 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Jan 24 00:55:58.131516 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.134576 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Jan 24 00:55:58.134694 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.134830 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Jan 24 00:55:58.134942 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.135044 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Jan 24 00:55:58.135146 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.135242 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Jan 24 00:55:58.135343 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.135439 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Jan 24 00:55:58.135550 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.135648 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Jan 24 00:55:58.135768 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.135863 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Jan 24 00:55:58.135964 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 24 00:55:58.136060 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Jan 24 00:55:58.136159 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:55:58.136254 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:55:58.136358 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:55:58.136453 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Jan 24 00:55:58.137374 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Jan 24 00:55:58.137526 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:55:58.137663 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Jan 24 00:55:58.137786 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 24 00:55:58.137893 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Jan 24 00:55:58.137993 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 24 00:55:58.138092 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 24 00:55:58.138188 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 24 00:55:58.138284 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 24 00:55:58.138379 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:55:58.138485 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 24 00:55:58.138605 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Jan 24 00:55:58.138700 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 24 00:55:58.138820 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 24 00:55:58.138927 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 24 00:55:58.139027 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Jan 24 00:55:58.139127 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 24 00:55:58.139222 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 24 00:55:58.139320 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 24 00:55:58.139415 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:55:58.139524 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 24 00:55:58.140107 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 24 00:55:58.140211 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 24 00:55:58.140307 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:55:58.140414 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 24 00:55:58.140521 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Jan 24 00:55:58.140682 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 24 00:55:58.140789 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 24 00:55:58.140883 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 24 00:55:58.140977 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:55:58.141082 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 24 00:55:58.141181 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Jan 24 00:55:58.141283 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 24 00:55:58.141378 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 24 00:55:58.141473 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 24 00:55:58.145609 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:55:58.145622 kernel: acpiphp: Slot [0] registered
Jan 24 00:55:58.145776 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 24 00:55:58.145882 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Jan 24 00:55:58.145983 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 24 00:55:58.146088 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 24 00:55:58.146184 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 24 00:55:58.146279 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 24 00:55:58.146374 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:55:58.146380 kernel: acpiphp: Slot [0-2] registered
Jan 24 00:55:58.146477 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 24 00:55:58.146584 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 24 00:55:58.146680 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:55:58.146691 kernel: acpiphp: Slot [0-3] registered
Jan 24 00:55:58.146796 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 24 00:55:58.146890 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 24 00:55:58.146985 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:55:58.146991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:55:58.146997 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:55:58.147003 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:55:58.147008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:55:58.147016 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:55:58.147022 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:55:58.147027 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:55:58.147033 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:55:58.147038 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:55:58.147044 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:55:58.147049 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:55:58.147054 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:55:58.147060 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:55:58.147067 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:55:58.147073 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:55:58.147078 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:55:58.147084 kernel: iommu: Default domain type: Translated
Jan 24 00:55:58.147089 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:55:58.147094 kernel: efivars: Registered efivars operations
Jan 24 00:55:58.147100 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:55:58.147105 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:55:58.147111 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 24 00:55:58.147118 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 24 00:55:58.147124 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 24 00:55:58.147129 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 24 00:55:58.147226 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:55:58.147322 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:55:58.147416 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:55:58.147423 kernel: vgaarb: loaded
Jan 24 00:55:58.147428 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:55:58.147434 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:55:58.147441 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:55:58.147447 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:55:58.147452 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:55:58.147458 kernel: pnp: PnP ACPI init
Jan 24 00:55:58.150833 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 24 00:55:58.150847 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 00:55:58.150853 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:55:58.150859 kernel: NET: Registered PF_INET protocol family
Jan 24 00:55:58.150883 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:55:58.150891 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:55:58.150897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:55:58.150902 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:55:58.150908 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:55:58.150913 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:55:58.150919 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:55:58.150925 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:55:58.150931 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:55:58.150939 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:55:58.151054 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 24 00:55:58.151159 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Jan 24 00:55:58.151256 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 24 00:55:58.151352 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 24 00:55:58.151448 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 24 00:55:58.151674 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 24 00:55:58.151788 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 24 00:55:58.151886 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 24 00:55:58.151986 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Jan 24 00:55:58.152083 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 24 00:55:58.152181 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 24 00:55:58.152276 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:55:58.152373 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 24 00:55:58.152467 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 24 00:55:58.152974 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 24 00:55:58.153078 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 24 00:55:58.153173 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:55:58.153271 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 24 00:55:58.153366 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:55:58.153490 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 24 00:55:58.154211 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 24 00:55:58.154323 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:55:58.154431 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 24 00:55:58.154528 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 24 00:55:58.154637 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:55:58.154746 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Jan 24 00:55:58.154889 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 24 00:55:58.154991 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 24 00:55:58.155088 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 24 00:55:58.155182 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:55:58.155277 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 24 00:55:58.155372 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 24 00:55:58.155466 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 24 00:55:58.155622 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:55:58.155732 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 24 00:55:58.155828 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 24 00:55:58.155926 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 24 00:55:58.156020 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:55:58.156114 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:55:58.156205 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:55:58.156295 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:55:58.156440 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 24 00:55:58.156594 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 24 00:55:58.156683 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Jan 24 00:55:58.157793 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Jan 24 00:55:58.158066 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 24 00:55:58.158311 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Jan 24 00:55:58.158582 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Jan 24 00:55:58.158859 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 24 00:55:58.159292 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 24 00:55:58.159519 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Jan 24 00:55:58.160774 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 24 00:55:58.160891 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Jan 24 00:55:58.160991 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 24 00:55:58.161089 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 24 00:55:58.161181 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 24 00:55:58.161273 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 24 00:55:58.161376 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 24 00:55:58.161469 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 24 00:55:58.161577 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 24 00:55:58.161713 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 24 00:55:58.161862 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Jan 24 00:55:58.161957 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 24 00:55:58.161965 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:55:58.161972 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:55:58.161978 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 00:55:58.161984 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Jan 24 00:55:58.161989 kernel: Initialise system trusted keyrings
Jan 24 00:55:58.161999 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:55:58.162005 kernel: Key type asymmetric registered
Jan 24 00:55:58.162011 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:55:58.162017 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:55:58.162022 kernel: io scheduler mq-deadline registered
Jan 24 00:55:58.162028 kernel: io scheduler kyber registered
Jan 24 00:55:58.162033 kernel: io scheduler bfq registered
Jan 24 00:55:58.162137 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 24 00:55:58.162238 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 24 00:55:58.162340 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 24 00:55:58.162439 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 24 00:55:58.162536 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 24 00:55:58.162694 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 24 00:55:58.162815 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 24 00:55:58.162912 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 24 00:55:58.163009 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 24 00:55:58.163105 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 24 00:55:58.163205 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 24 00:55:58.163299 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 24 00:55:58.163394 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 24 00:55:58.163489 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 24 00:55:58.164114 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 24 00:55:58.164220 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 24 00:55:58.164228 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:55:58.164325 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 24 00:55:58.164425 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 24 00:55:58.164432 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:55:58.164438 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 24 00:55:58.164444 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:55:58.164450 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:55:58.164456 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:55:58.164461 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:55:58.164467 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:55:58.164594 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 24 00:55:58.164607 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:55:58.164699 kernel: rtc_cmos 00:03: registered as rtc0
Jan 24 00:55:58.164798 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:55:57 UTC (1769216157)
Jan 24 00:55:58.164890 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:55:58.164900 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:55:58.164906 kernel: efifb: probing for efifb
Jan 24 00:55:58.164912 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k
Jan 24 00:55:58.164917 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 24 00:55:58.164925 kernel: efifb: scrolling: redraw
Jan 24 00:55:58.164931 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:55:58.164936 kernel: Console: switching to colour frame buffer device 160x50
Jan 24 00:55:58.164942 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:55:58.164948 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:55:58.164954 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:55:58.164959 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:55:58.164965 kernel: Segment Routing with IPv6
Jan 24 00:55:58.164970 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:55:58.164979 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:55:58.164984 kernel: Key type dns_resolver registered
Jan 24 00:55:58.164990 kernel: IPI shorthand broadcast: enabled
Jan 24 00:55:58.164995 kernel: sched_clock: Marking stable (1507010210, 196088640)->(1742251640, -39152790)
Jan 24 00:55:58.165001 kernel: registered taskstats version 1
Jan 24 00:55:58.165007 kernel: Loading compiled-in X.509 certificates
Jan 24 00:55:58.165012 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:55:58.165018 kernel: Key type .fscrypt registered
Jan 24 00:55:58.165023 kernel: Key type fscrypt-provisioning registered
Jan 24 00:55:58.165032 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:55:58.165038 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:55:58.165043 kernel: ima: No architecture policies found
Jan 24 00:55:58.165049 kernel: clk: Disabling unused clocks
Jan 24 00:55:58.165054 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:55:58.165060 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:55:58.165065 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:55:58.165071 kernel: Run /init as init process
Jan 24 00:55:58.165076 kernel: with arguments:
Jan 24 00:55:58.165085 kernel: /init
Jan 24 00:55:58.165091 kernel: with environment:
Jan 24 00:55:58.165096 kernel: HOME=/
Jan 24 00:55:58.165102 kernel: TERM=linux
Jan 24 00:55:58.165109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:55:58.165117 systemd[1]: Detected virtualization kvm.
Jan 24 00:55:58.165123 systemd[1]: Detected architecture x86-64.
Jan 24 00:55:58.165131 systemd[1]: Running in initrd.
Jan 24 00:55:58.165137 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:55:58.165143 systemd[1]: Hostname set to .
Jan 24 00:55:58.165149 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:55:58.165155 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:55:58.165161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:55:58.165167 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:55:58.165173 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:55:58.165182 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:55:58.165188 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:55:58.165194 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:55:58.165201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:55:58.165207 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:55:58.165213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:55:58.165219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:55:58.165227 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:55:58.165233 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:55:58.165239 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:55:58.165245 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:55:58.165251 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:55:58.165257 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:55:58.165265 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:55:58.165271 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:55:58.165279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:55:58.165285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:55:58.165291 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:55:58.165297 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:55:58.165303 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:55:58.165309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:55:58.165315 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:55:58.165321 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:55:58.165326 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:55:58.165335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:55:58.165341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:55:58.165347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:55:58.165371 systemd-journald[188]: Collecting audit messages is disabled.
Jan 24 00:55:58.165389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:55:58.165395 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:55:58.165401 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:55:58.165407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:55:58.165416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:55:58.165422 systemd-journald[188]: Journal started
Jan 24 00:55:58.165436 systemd-journald[188]: Runtime Journal (/run/log/journal/96f7f6e19323445984ecfedc9ac7898b) is 8.0M, max 76.3M, 68.3M free.
Jan 24 00:55:58.159785 systemd-modules-load[190]: Inserted module 'overlay'
Jan 24 00:55:58.173996 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:55:58.177164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:55:58.179536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:55:58.190783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:55:58.193738 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:55:58.195735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:55:58.200628 kernel: Bridge firewalling registered
Jan 24 00:55:58.197798 systemd-modules-load[190]: Inserted module 'br_netfilter'
Jan 24 00:55:58.200251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:55:58.208904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:55:58.220322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:55:58.224792 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:55:58.225904 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:55:58.231674 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:55:58.237722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:55:58.255440 dracut-cmdline[222]: dracut-dracut-053
Jan 24 00:55:58.260576 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:55:58.265470 systemd-resolved[223]: Positive Trust Anchors:
Jan 24 00:55:58.266202 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:55:58.266227 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:55:58.268763 systemd-resolved[223]: Defaulting to hostname 'linux'.
Jan 24 00:55:58.271380 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:55:58.273574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:55:58.371621 kernel: SCSI subsystem initialized
Jan 24 00:55:58.382582 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:55:58.392574 kernel: iscsi: registered transport (tcp)
Jan 24 00:55:58.410386 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:55:58.410449 kernel: QLogic iSCSI HBA Driver
Jan 24 00:55:58.469321 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:55:58.477669 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:55:58.536622 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:55:58.536689 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:55:58.539665 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:55:58.605614 kernel: raid6: avx512x4 gen() 19120 MB/s
Jan 24 00:55:58.624617 kernel: raid6: avx512x2 gen() 24331 MB/s
Jan 24 00:55:58.642599 kernel: raid6: avx512x1 gen() 30517 MB/s
Jan 24 00:55:58.660609 kernel: raid6: avx2x4 gen() 51572 MB/s
Jan 24 00:55:58.678595 kernel: raid6: avx2x2 gen() 54732 MB/s
Jan 24 00:55:58.697347 kernel: raid6: avx2x1 gen() 44830 MB/s
Jan 24 00:55:58.697432 kernel: raid6: using algorithm avx2x2 gen() 54732 MB/s
Jan 24 00:55:58.716458 kernel: raid6: .... xor() 36635 MB/s, rmw enabled
Jan 24 00:55:58.716528 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:55:58.733577 kernel: xor: automatically using best checksumming function avx
Jan 24 00:55:58.903577 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:55:58.924894 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:55:58.935888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:55:58.983959 systemd-udevd[408]: Using default interface naming scheme 'v255'.
Jan 24 00:55:58.995485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:55:59.007858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:55:59.046012 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 24 00:55:59.111865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:55:59.118826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:55:59.240659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:55:59.251899 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:55:59.259235 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:55:59.261148 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:55:59.262315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:55:59.262987 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:55:59.271068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:55:59.300650 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:55:59.352577 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:55:59.357584 kernel: scsi host0: Virtio SCSI HBA
Jan 24 00:55:59.373570 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 24 00:55:59.377925 kernel: ACPI: bus type USB registered
Jan 24 00:55:59.380578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:55:59.380698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:55:59.381536 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:55:59.382287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:55:59.382401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:55:59.383923 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:55:59.394747 kernel: usbcore: registered new interface driver usbfs
Jan 24 00:55:59.396848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:55:59.399588 kernel: libata version 3.00 loaded.
Jan 24 00:55:59.398691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:55:59.398800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:55:59.402090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:55:59.411937 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:55:59.411977 kernel: usbcore: registered new interface driver hub
Jan 24 00:55:59.416205 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:55:59.416231 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:55:59.416714 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:55:59.428563 kernel: usbcore: registered new device driver usb
Jan 24 00:55:59.432454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:55:59.438009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:55:59.447369 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:55:59.447710 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:55:59.469981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:55:59.472911 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 24 00:55:59.475644 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Jan 24 00:55:59.478579 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 24 00:55:59.481182 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 24 00:55:59.481319 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 00:55:59.484579 kernel: scsi host1: ahci
Jan 24 00:55:59.488798 kernel: scsi host2: ahci
Jan 24 00:55:59.489019 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:55:59.489031 kernel: scsi host3: ahci
Jan 24 00:55:59.489156 kernel: GPT:17805311 != 160006143
Jan 24 00:55:59.489164 kernel: scsi host4: ahci
Jan 24 00:55:59.489277 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:55:59.489286 kernel: GPT:17805311 != 160006143
Jan 24 00:55:59.490564 kernel: scsi host5: ahci
Jan 24 00:55:59.490615 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:55:59.490632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:55:59.491974 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 24 00:55:59.500791 kernel: scsi host6: ahci
Jan 24 00:55:59.500856 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 24 00:55:59.510952 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48
Jan 24 00:55:59.510991 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 24 00:55:59.511172 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48
Jan 24 00:55:59.511191 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 24 00:55:59.511310 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48
Jan 24 00:55:59.511318 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 24 00:55:59.511431 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48
Jan 24 00:55:59.511439 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 24 00:55:59.511574 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48
Jan 24 00:55:59.511583 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 24 00:55:59.511708 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48
Jan 24 00:55:59.511719 kernel: hub 1-0:1.0: USB hub found
Jan 24 00:55:59.530410 kernel: hub 1-0:1.0: 4 ports detected
Jan 24 00:55:59.540594 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 24 00:55:59.544674 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (472)
Jan 24 00:55:59.546685 kernel: hub 2-0:1.0: USB hub found
Jan 24 00:55:59.550554 kernel: hub 2-0:1.0: 4 ports detected
Jan 24 00:55:59.551669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 24 00:55:59.559958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 24 00:55:59.565566 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (474)
Jan 24 00:55:59.566590 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 24 00:55:59.575781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 24 00:55:59.576503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 24 00:55:59.581767 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:55:59.588633 disk-uuid[585]: Primary Header is updated.
Jan 24 00:55:59.588633 disk-uuid[585]: Secondary Entries is updated.
Jan 24 00:55:59.588633 disk-uuid[585]: Secondary Header is updated.
Jan 24 00:55:59.788764 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 24 00:55:59.828370 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:55:59.828466 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:55:59.830559 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:55:59.834599 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 24 00:55:59.844566 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:55:59.844636 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:55:59.849855 kernel: ata1.00: applying bridge limits
Jan 24 00:55:59.854364 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:55:59.854580 kernel: ata1.00: configured for UDMA/100
Jan 24 00:55:59.864699 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:55:59.921027 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:55:59.921537 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:55:59.938866 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:55:59.958997 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 24 00:55:59.971611 kernel: usbcore: registered new interface driver usbhid
Jan 24 00:55:59.971873 kernel: usbhid: USB HID core driver
Jan 24 00:55:59.985258 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 24 00:55:59.985328 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 24 00:56:00.604603 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:56:00.609618 disk-uuid[586]: The operation has completed successfully.
Jan 24 00:56:00.702434 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:56:00.702705 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:56:00.740864 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:56:00.749691 sh[598]: Success
Jan 24 00:56:00.777098 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:56:00.865679 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:56:00.879767 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:56:00.885233 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:56:00.922654 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:56:00.922786 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:56:00.933717 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:56:00.933796 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:56:00.938149 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:56:00.956603 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:56:00.960025 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:56:00.962789 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:56:00.974911 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:56:00.979199 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:56:01.002614 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:56:01.008836 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:56:01.008884 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:56:01.025332 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:56:01.025392 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:56:01.048230 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:56:01.054438 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:56:01.064439 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:56:01.076811 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:56:01.210475 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:56:01.211140 ignition[698]: Ignition 2.19.0
Jan 24 00:56:01.211159 ignition[698]: Stage: fetch-offline
Jan 24 00:56:01.211253 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:56:01.211293 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 24 00:56:01.211537 ignition[698]: parsed url from cmdline: ""
Jan 24 00:56:01.211589 ignition[698]: no config URL provided
Jan 24 00:56:01.211613 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:56:01.211649 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:56:01.211669 ignition[698]: failed to fetch config: resource requires networking
Jan 24 00:56:01.212140 ignition[698]: Ignition finished successfully
Jan 24 00:56:01.221672 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:56:01.222591 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:56:01.271633 systemd-networkd[783]: lo: Link UP
Jan 24 00:56:01.271658 systemd-networkd[783]: lo: Gained carrier
Jan 24 00:56:01.277030 systemd-networkd[783]: Enumeration completed
Jan 24 00:56:01.277710 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:56:01.278073 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:56:01.278080 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:56:01.280284 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:01.280292 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:56:01.281459 systemd-networkd[783]: eth0: Link UP Jan 24 00:56:01.281467 systemd-networkd[783]: eth0: Gained carrier Jan 24 00:56:01.281479 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:01.282103 systemd[1]: Reached target network.target - Network. Jan 24 00:56:01.286266 systemd-networkd[783]: eth1: Link UP Jan 24 00:56:01.286273 systemd-networkd[783]: eth1: Gained carrier Jan 24 00:56:01.286286 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:01.291829 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 24 00:56:01.327590 ignition[786]: Ignition 2.19.0 Jan 24 00:56:01.327611 ignition[786]: Stage: fetch Jan 24 00:56:01.327906 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:01.327928 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:56:01.329675 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:56:01.328065 ignition[786]: parsed url from cmdline: "" Jan 24 00:56:01.328074 ignition[786]: no config URL provided Jan 24 00:56:01.328084 ignition[786]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:56:01.328102 ignition[786]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:56:01.328130 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 24 00:56:01.328410 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:56:01.349695 systemd-networkd[783]: eth0: DHCPv4 address 89.167.6.198/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:56:01.528643 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 24 00:56:01.538510 ignition[786]: GET result: OK Jan 24 00:56:01.538788 ignition[786]: parsing config with SHA512: 0b2a0295a09ab032eb470a3da4ec77443d1974676278d1883e6e1066f14aa2d1df8c3b592dc34df0b24f44f62fceec461fe312178a3c055dd080dca66baaaddd Jan 24 00:56:01.546846 unknown[786]: fetched base config from "system" Jan 24 00:56:01.546865 unknown[786]: fetched base config from "system" Jan 24 00:56:01.547683 ignition[786]: fetch: fetch complete Jan 24 00:56:01.546884 unknown[786]: fetched user config from "hetzner" Jan 24 00:56:01.547695 ignition[786]: fetch: fetch passed Jan 24 00:56:01.547798 ignition[786]: Ignition finished successfully Jan 24 00:56:01.553047 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:56:01.559795 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 24 00:56:01.600711 ignition[793]: Ignition 2.19.0 Jan 24 00:56:01.600751 ignition[793]: Stage: kargs Jan 24 00:56:01.601031 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:01.601072 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:56:01.602410 ignition[793]: kargs: kargs passed Jan 24 00:56:01.605443 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:56:01.602494 ignition[793]: Ignition finished successfully Jan 24 00:56:01.614926 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:56:01.661265 ignition[799]: Ignition 2.19.0 Jan 24 00:56:01.661292 ignition[799]: Stage: disks Jan 24 00:56:01.661644 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:01.661667 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:56:01.663071 ignition[799]: disks: disks passed Jan 24 00:56:01.666144 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:56:01.663170 ignition[799]: Ignition finished successfully Jan 24 00:56:01.668802 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:56:01.670897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:56:01.672850 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:56:01.674641 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:56:01.676451 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:56:01.686940 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:56:01.717436 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 24 00:56:01.722083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:56:01.730131 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:56:01.848563 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:56:01.849033 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:56:01.849933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:56:01.856772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:56:01.858948 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:56:01.860926 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:56:01.862914 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:56:01.862943 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:56:01.870990 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:56:01.871585 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (816) Jan 24 00:56:01.873033 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 24 00:56:01.879806 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:01.879838 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:01.879857 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:56:01.890487 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:56:01.890520 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:56:01.898935 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:56:01.928651 coreos-metadata[818]: Jan 24 00:56:01.928 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 24 00:56:01.929831 coreos-metadata[818]: Jan 24 00:56:01.929 INFO Fetch successful Jan 24 00:56:01.930227 coreos-metadata[818]: Jan 24 00:56:01.930 INFO wrote hostname ci-4081-3-6-n-32cc93a80b to /sysroot/etc/hostname Jan 24 00:56:01.930705 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:56:01.933868 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:56:01.937382 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:56:01.941210 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:56:01.945244 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:56:02.027169 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:56:02.037638 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:56:02.040913 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:56:02.045769 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:56:02.048584 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:02.072435 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:56:02.073914 ignition[932]: INFO : Ignition 2.19.0 Jan 24 00:56:02.073914 ignition[932]: INFO : Stage: mount Jan 24 00:56:02.073914 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:02.073914 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:56:02.073914 ignition[932]: INFO : mount: mount passed Jan 24 00:56:02.073914 ignition[932]: INFO : Ignition finished successfully Jan 24 00:56:02.073615 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:56:02.080637 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:56:02.085938 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:56:02.099581 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Jan 24 00:56:02.104113 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:02.104158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:02.104171 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:56:02.110870 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:56:02.110903 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:56:02.113522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:56:02.129752 ignition[961]: INFO : Ignition 2.19.0 Jan 24 00:56:02.130309 ignition[961]: INFO : Stage: files Jan 24 00:56:02.130837 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:02.131912 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:56:02.131912 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:56:02.132691 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:56:02.132691 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:56:02.136293 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:56:02.136780 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:56:02.137429 unknown[961]: wrote ssh authorized keys file for user: core Jan 24 00:56:02.137988 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:56:02.139224 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:56:02.140071 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:56:02.377201 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:56:02.680425 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:56:02.682694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:56:02.692041 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:02.692041 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:02.692041 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:02.692041 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:56:02.917071 systemd-networkd[783]: eth0: Gained IPv6LL Jan 24 00:56:03.110886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:56:03.173215 systemd-networkd[783]: eth1: Gained IPv6LL Jan 24 00:56:03.424310 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:56:03.424310 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:56:03.427966 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:56:03.445244 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:56:03.445244 ignition[961]: INFO : files: files passed Jan 24 00:56:03.445244 ignition[961]: INFO : Ignition finished successfully Jan 24 00:56:03.432735 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:56:03.441663 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:56:03.445850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:56:03.449261 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:56:03.449577 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 24 00:56:03.467826 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:03.469417 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:03.470593 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:03.471658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:56:03.474096 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:56:03.483774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:56:03.523620 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:56:03.523897 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:56:03.526449 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:56:03.527657 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:56:03.529220 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:56:03.539834 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:56:03.557721 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:56:03.566884 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:56:03.585135 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:56:03.586436 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:56:03.588961 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:56:03.592012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:56:03.592171 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:56:03.594142 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:56:03.594722 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:56:03.595528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:56:03.596353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:56:03.597170 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:56:03.597982 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:56:03.599018 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:56:03.599956 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:56:03.600875 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:56:03.602077 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:56:03.603207 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:56:03.603335 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:56:03.604578 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:56:03.605506 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:56:03.606500 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:56:03.607513 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 24 00:56:03.608252 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:56:03.608373 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:56:03.609658 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:56:03.609784 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:56:03.610652 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:56:03.610757 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:56:03.611604 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:56:03.611703 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:56:03.628770 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:56:03.629883 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:56:03.630000 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:56:03.633726 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:56:03.634670 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:56:03.635126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:56:03.635973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:56:03.636055 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:56:03.640896 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:56:03.640998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:56:03.644571 ignition[1015]: INFO : Ignition 2.19.0 Jan 24 00:56:03.644571 ignition[1015]: INFO : Stage: umount Jan 24 00:56:03.644571 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:03.644571 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:56:03.650592 ignition[1015]: INFO : umount: umount passed Jan 24 00:56:03.650592 ignition[1015]: INFO : Ignition finished successfully Jan 24 00:56:03.646592 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:56:03.646702 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:56:03.647840 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:56:03.647924 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:56:03.649274 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:56:03.649327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:56:03.651158 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:56:03.651207 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:56:03.652061 systemd[1]: Stopped target network.target - Network. Jan 24 00:56:03.652441 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:56:03.652489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:56:03.654620 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:56:03.654972 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:56:03.658586 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:56:03.658944 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 24 00:56:03.659253 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:56:03.659664 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:56:03.661581 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:56:03.662038 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:56:03.662426 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:56:03.663223 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:56:03.663275 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:56:03.664020 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:56:03.664059 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:56:03.665279 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:56:03.667630 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:56:03.669693 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:56:03.672076 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:56:03.672178 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:56:03.672209 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 24 00:56:03.675344 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:56:03.675409 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:56:03.676670 systemd-networkd[783]: eth1: DHCPv6 lease lost Jan 24 00:56:03.677955 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:56:03.678082 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:56:03.679340 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:56:03.679422 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:56:03.688981 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:56:03.693645 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:56:03.693721 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:56:03.694290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:56:03.694336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:56:03.696294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:56:03.696340 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:56:03.697123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:56:03.716998 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:56:03.717155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:56:03.719219 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:56:03.719312 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:56:03.720030 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:56:03.720137 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:56:03.721534 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:56:03.721955 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:56:03.722399 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 24 00:56:03.722431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:56:03.723271 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:56:03.723318 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:56:03.724656 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:56:03.724697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:56:03.726109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:56:03.726152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:56:03.727512 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:56:03.727597 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:56:03.737708 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:56:03.738517 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:56:03.738933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:56:03.739669 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:56:03.739707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:03.743983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:56:03.744092 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:56:03.745119 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:56:03.746225 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:56:03.756326 systemd[1]: Switching root. Jan 24 00:56:03.786828 systemd-journald[188]: Journal stopped Jan 24 00:56:04.920296 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Jan 24 00:56:04.920362 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:56:04.920376 kernel: SELinux: policy capability open_perms=1 Jan 24 00:56:04.920384 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:56:04.920393 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:56:04.920401 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:56:04.920409 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:56:04.920418 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:56:04.920426 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:56:04.920438 kernel: audit: type=1403 audit(1769216163.994:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:56:04.920450 systemd[1]: Successfully loaded SELinux policy in 84.036ms. Jan 24 00:56:04.920466 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.967ms. Jan 24 00:56:04.920479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:56:04.920488 systemd[1]: Detected virtualization kvm. Jan 24 00:56:04.920499 systemd[1]: Detected architecture x86-64. Jan 24 00:56:04.920512 systemd[1]: Detected first boot. Jan 24 00:56:04.920521 systemd[1]: Hostname set to <ci-4081-3-6-n-32cc93a80b>. Jan 24 00:56:04.920530 systemd[1]: Initializing machine ID from VM UUID. 
Jan 24 00:56:04.921554 zram_generator::config[1057]: No configuration found. Jan 24 00:56:04.921570 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:56:04.921581 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:56:04.921590 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:56:04.921605 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:56:04.921619 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:56:04.921627 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:56:04.921636 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:56:04.921646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:56:04.921655 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:56:04.921664 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:56:04.921673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:56:04.921682 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:56:04.921691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:56:04.921704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:56:04.921713 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:56:04.921723 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:56:04.921732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:56:04.921741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:56:04.921757 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:56:04.921770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:56:04.921779 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:56:04.921790 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:56:04.921799 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:56:04.921809 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:56:04.921817 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:56:04.921826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:56:04.921835 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:56:04.921844 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:56:04.921856 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:56:04.921865 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:56:04.921873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:56:04.921882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:56:04.921891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:56:04.921902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 24 00:56:04.921912 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:56:04.921921 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:56:04.921929 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:56:04.921941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:04.921950 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:56:04.921958 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:56:04.922164 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:56:04.922178 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:56:04.922188 systemd[1]: Reached target machines.target - Containers. Jan 24 00:56:04.922197 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:56:04.922210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:56:04.922219 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:56:04.922231 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:56:04.922240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:56:04.922249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:56:04.922259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:56:04.922268 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:56:04.922276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:56:04.922290 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:56:04.922299 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:56:04.923578 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:56:04.923598 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:56:04.923608 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:56:04.923617 kernel: fuse: init (API version 7.39) Jan 24 00:56:04.923627 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:56:04.923640 kernel: loop: module loaded Jan 24 00:56:04.923649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:56:04.923658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:56:04.923670 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:56:04.923679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:56:04.923688 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:56:04.923697 systemd[1]: Stopped verity-setup.service. Jan 24 00:56:04.923706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:04.923735 systemd-journald[1137]: Collecting audit messages is disabled. 
Jan 24 00:56:04.923772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:56:04.923784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:56:04.923793 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:56:04.923803 systemd-journald[1137]: Journal started Jan 24 00:56:04.923823 systemd-journald[1137]: Runtime Journal (/run/log/journal/96f7f6e19323445984ecfedc9ac7898b) is 8.0M, max 76.3M, 68.3M free. Jan 24 00:56:04.617203 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:56:04.644847 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:56:04.645431 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:56:04.930678 kernel: ACPI: bus type drm_connector registered Jan 24 00:56:04.933571 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:56:04.933832 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:56:04.934320 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:56:04.934845 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:56:04.935433 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:56:04.936871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:56:04.937512 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:56:04.937682 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:56:04.938322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:56:04.938484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:56:04.939231 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:56:04.939708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:56:04.940351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:56:04.940911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:56:04.941580 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:56:04.941791 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:56:04.942438 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:56:04.942742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:56:04.943461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:56:04.944151 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:56:04.945023 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:56:04.958482 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:56:04.966939 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:56:04.973629 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:56:04.974130 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:56:04.974171 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:56:04.975475 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 24 00:56:04.985837 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:56:04.994393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:56:04.995126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:56:04.999806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:56:05.008728 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:56:05.010329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:56:05.018731 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:56:05.019720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:56:05.023651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:56:05.032796 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:56:05.040411 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:56:05.046278 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:56:05.046818 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:56:05.048228 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:56:05.060478 systemd-journald[1137]: Time spent on flushing to /var/log/journal/96f7f6e19323445984ecfedc9ac7898b is 66.960ms for 1176 entries. Jan 24 00:56:05.060478 systemd-journald[1137]: System Journal (/var/log/journal/96f7f6e19323445984ecfedc9ac7898b) is 8.0M, max 584.8M, 576.8M free. Jan 24 00:56:05.157380 systemd-journald[1137]: Received client request to flush runtime journal. Jan 24 00:56:05.157407 kernel: loop0: detected capacity change from 0 to 8 Jan 24 00:56:05.157419 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:56:05.086714 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:56:05.089235 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:56:05.103228 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:56:05.159467 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:56:05.177621 kernel: loop1: detected capacity change from 0 to 140768 Jan 24 00:56:05.183743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:56:05.190193 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:56:05.192288 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:56:05.193302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:56:05.194161 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:56:05.207731 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:56:05.210820 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 24 00:56:05.250564 kernel: loop2: detected capacity change from 0 to 142488 Jan 24 00:56:05.279074 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:56:05.286812 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 24 00:56:05.287145 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 24 00:56:05.293083 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:56:05.307577 kernel: loop3: detected capacity change from 0 to 224512 Jan 24 00:56:05.356604 kernel: loop4: detected capacity change from 0 to 8 Jan 24 00:56:05.365640 kernel: loop5: detected capacity change from 0 to 140768 Jan 24 00:56:05.391568 kernel: loop6: detected capacity change from 0 to 142488 Jan 24 00:56:05.412219 kernel: loop7: detected capacity change from 0 to 224512 Jan 24 00:56:05.427023 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 24 00:56:05.428018 (sd-merge)[1203]: Merged extensions into '/usr'. Jan 24 00:56:05.434037 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:56:05.434186 systemd[1]: Reloading... Jan 24 00:56:05.517561 zram_generator::config[1225]: No configuration found. Jan 24 00:56:05.525607 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:56:05.623489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:05.660127 systemd[1]: Reloading finished in 224 ms. Jan 24 00:56:05.690963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:56:05.691944 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:56:05.695128 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:56:05.701739 systemd[1]: Starting ensure-sysext.service... Jan 24 00:56:05.703733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:56:05.712788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:56:05.730419 systemd[1]: Reloading requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:56:05.730438 systemd[1]: Reloading... Jan 24 00:56:05.748627 systemd-udevd[1275]: Using default interface naming scheme 'v255'. Jan 24 00:56:05.750671 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:56:05.751012 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:56:05.752790 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:56:05.753125 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 24 00:56:05.753200 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 24 00:56:05.758310 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:56:05.758326 systemd-tmpfiles[1274]: Skipping /boot Jan 24 00:56:05.772765 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 24 00:56:05.772788 systemd-tmpfiles[1274]: Skipping /boot Jan 24 00:56:05.825616 zram_generator::config[1313]: No configuration found. Jan 24 00:56:05.957567 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 24 00:56:05.967581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:06.001568 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:56:06.014573 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:56:06.023726 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:56:06.026594 systemd[1]: Reloading finished in 295 ms. Jan 24 00:56:06.050604 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:56:06.051349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:56:06.075364 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 24 00:56:06.079574 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 24 00:56:06.084916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:06.087629 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:56:06.090743 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 24 00:56:06.095860 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:56:06.105177 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:56:06.105446 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:56:06.107390 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:56:06.107606 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:56:06.105836 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:56:06.106314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:56:06.122613 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:56:06.122701 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 24 00:56:06.117420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:56:06.120238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:56:06.126564 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 24 00:56:06.126613 kernel: [drm] features: -context_init Jan 24 00:56:06.130561 kernel: [drm] number of scanouts: 1 Jan 24 00:56:06.130610 kernel: [drm] number of cap sets: 0 Jan 24 00:56:06.131850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:56:06.132580 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 24 00:56:06.134183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:56:06.137104 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 24 00:56:06.139682 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 24 00:56:06.139722 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:56:06.148220 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 24 00:56:06.161624 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:56:06.171789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:56:06.174203 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:56:06.175093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:06.176504 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:56:06.177797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:56:06.178733 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:56:06.180591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:56:06.199709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:56:06.200393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:56:06.217607 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:06.217825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:56:06.227717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:56:06.239798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:56:06.241817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:56:06.243734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:56:06.243863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:56:06.248916 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:56:06.249344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:06.253218 augenrules[1419]: No rules Jan 24 00:56:06.256637 systemd[1]: Finished ensure-sysext.service. Jan 24 00:56:06.257286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:56:06.258468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:56:06.261116 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1326) Jan 24 00:56:06.264472 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:56:06.264691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:56:06.268775 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:56:06.269831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:56:06.279434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:56:06.279853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 24 00:56:06.298067 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:56:06.308210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:56:06.310798 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:56:06.323366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:56:06.323512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:56:06.326665 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:56:06.334850 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:56:06.344795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:56:06.345236 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:56:06.345481 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:56:06.351872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:56:06.361359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:56:06.361569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:06.368801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:56:06.387431 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:56:06.396019 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:56:06.405186 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:56:06.431603 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:56:06.467691 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:56:06.470690 systemd-networkd[1402]: lo: Link UP Jan 24 00:56:06.470695 systemd-networkd[1402]: lo: Gained carrier Jan 24 00:56:06.472487 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:56:06.476329 systemd-networkd[1402]: Enumeration completed Jan 24 00:56:06.476735 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:56:06.479065 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:06.479071 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:56:06.482446 systemd-networkd[1402]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:06.482454 systemd-networkd[1402]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:56:06.483059 systemd-networkd[1402]: eth0: Link UP Jan 24 00:56:06.483064 systemd-networkd[1402]: eth0: Gained carrier Jan 24 00:56:06.483075 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
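
The "found matching network ... based on potentially unpredictable interface name" entries above reflect how systemd-networkd selects a .network file: candidates are sorted by filename and the first file whose [Match] section matches the link is applied, which is why Flatcar's catch-all ships as zz-default.network so it sorts last. A toy model of that first-match rule (the filenames and globs here are illustrative):

    # First .network file (in filename order) whose Name= glob matches wins.
    import fnmatch

    def pick_network(ifname, files):
        # files maps filename -> the Match section's Name= glob
        for fname in sorted(files):
            if fnmatch.fnmatch(ifname, files[fname]):
                return fname
        return None

    files = {"10-static-eth0.network": "eth0", "zz-default.network": "*"}
    print(pick_network("eth0", files))  # -> 10-static-eth0.network
    print(pick_network("eth1", files))  # -> zz-default.network
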
Jan 24 00:56:06.485710 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:56:06.485822 systemd-networkd[1402]: eth1: Link UP Jan 24 00:56:06.485827 systemd-networkd[1402]: eth1: Gained carrier Jan 24 00:56:06.485842 systemd-networkd[1402]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:06.486834 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:56:06.489333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:56:06.490406 systemd-resolved[1404]: Positive Trust Anchors: Jan 24 00:56:06.490417 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:56:06.490439 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:56:06.498764 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:56:06.501297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:06.503707 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:56:06.505429 systemd-resolved[1404]: Using system hostname 'ci-4081-3-6-n-32cc93a80b'. Jan 24 00:56:06.510315 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:56:06.510902 systemd[1]: Reached target network.target - Network. Jan 24 00:56:06.511252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:56:06.511612 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:56:06.512045 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:56:06.512424 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:56:06.517173 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:56:06.517833 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:56:06.518278 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:56:06.519743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:56:06.519790 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:56:06.521075 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:56:06.523578 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:56:06.525607 systemd-networkd[1402]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:56:06.527216 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 24 00:56:06.528354 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 24 00:56:06.535058 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:56:06.538632 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:56:06.540726 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:56:06.541579 systemd-networkd[1402]: eth0: DHCPv4 address 89.167.6.198/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:56:06.542191 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:56:06.542641 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 24 00:56:06.544481 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:56:06.545032 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:56:06.545066 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:56:06.550792 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:56:06.553680 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:56:06.557787 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:56:06.567789 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:56:06.570312 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:56:06.572312 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:56:06.577776 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:56:06.582686 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:56:06.586321 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 24 00:56:06.589058 coreos-metadata[1464]: Jan 24 00:56:06.588 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 24 00:56:06.594744 coreos-metadata[1464]: Jan 24 00:56:06.594 INFO Fetch successful Jan 24 00:56:06.594744 coreos-metadata[1464]: Jan 24 00:56:06.594 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 24 00:56:06.594878 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:56:06.596452 coreos-metadata[1464]: Jan 24 00:56:06.595 INFO Fetch successful Jan 24 00:56:06.598233 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:56:06.611891 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:56:06.614088 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:56:06.615759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:56:06.617783 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:56:06.624742 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
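
The coreos-metadata fetches above go to Hetzner's link-local metadata service; the URL is verbatim from the log. A minimal reproduction of the first fetch, runnable only from the server itself:

    # Fetch the same Hetzner metadata endpoint the agent hits above.
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/metadata"
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())
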
Jan 24 00:56:06.633781 extend-filesystems[1469]: Found loop4 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found loop5 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found loop6 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found loop7 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda1 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda2 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda3 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found usr Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda4 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda6 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda7 Jan 24 00:56:06.637354 extend-filesystems[1469]: Found sda9 Jan 24 00:56:06.637354 extend-filesystems[1469]: Checking size of /dev/sda9 Jan 24 00:56:06.637348 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:56:06.681796 jq[1468]: false Jan 24 00:56:06.637593 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:56:06.656154 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:56:06.656392 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:56:06.710782 extend-filesystems[1469]: Resized partition /dev/sda9 Jan 24 00:56:06.716771 jq[1478]: true Jan 24 00:56:06.717833 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:56:06.722637 tar[1493]: linux-amd64/LICENSE Jan 24 00:56:06.722637 tar[1493]: linux-amd64/helm Jan 24 00:56:06.728943 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 24 00:56:06.733070 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:56:06.733263 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:56:06.737706 dbus-daemon[1465]: [system] SELinux support is enabled Jan 24 00:56:06.738245 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:56:06.744657 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:56:06.744699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:56:06.745302 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:56:06.745325 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:56:06.756475 update_engine[1477]: I20260124 00:56:06.756114 1477 main.cc:92] Flatcar Update Engine starting Jan 24 00:56:06.760306 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:56:06.761479 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:56:06.767065 update_engine[1477]: I20260124 00:56:06.764778 1477 update_check_scheduler.cc:74] Next update check in 11m26s Jan 24 00:56:06.767766 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:56:06.768768 jq[1507]: true Jan 24 00:56:06.841411 systemd-logind[1476]: New seat seat0. 
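
For scale, the EXT4 resize being kicked off above ("resizing filesystem from 1617920 to 19393531 blocks") is counted in 4 KiB blocks, so the root filesystem grows from about 6.2 GiB to about 74 GiB; a quick check:

    # Convert the logged 4 KiB block counts to GiB.
    GIB = 1024 ** 3
    for label, blocks in [("before", 1617920), ("after", 19393531)]:
        print(f"{label}: {blocks * 4096 / GIB:.2f} GiB")
    # before: 6.17 GiB
    # after: 73.98 GiB
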
Jan 24 00:56:06.851418 systemd-logind[1476]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:56:06.851439 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:56:06.853631 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:56:06.893974 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:56:06.897260 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:56:06.924221 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1338) Jan 24 00:56:06.940844 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:56:06.943105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:56:06.949871 systemd[1]: Starting sshkeys.service... Jan 24 00:56:06.971086 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:56:06.980866 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:56:07.016927 containerd[1500]: time="2026-01-24T00:56:07.015811789Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:56:07.018914 coreos-metadata[1544]: Jan 24 00:56:07.018 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 24 00:56:07.020053 coreos-metadata[1544]: Jan 24 00:56:07.019 INFO Fetch successful Jan 24 00:56:07.026016 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:56:07.051892 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 24 00:56:07.051959 containerd[1500]: time="2026-01-24T00:56:07.039389979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.051959 containerd[1500]: time="2026-01-24T00:56:07.043206839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:07.051959 containerd[1500]: time="2026-01-24T00:56:07.043238069Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:56:07.051959 containerd[1500]: time="2026-01-24T00:56:07.043253119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:56:07.051907 unknown[1544]: wrote ssh authorized keys file for user: core Jan 24 00:56:07.052303 containerd[1500]: time="2026-01-24T00:56:07.052091489Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:56:07.052303 containerd[1500]: time="2026-01-24T00:56:07.052119529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.053160 containerd[1500]: time="2026-01-24T00:56:07.053137239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:07.055265 extend-filesystems[1504]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:56:07.055265 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:56:07.055265 extend-filesystems[1504]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 24 00:56:07.063615 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Jan 24 00:56:07.063615 extend-filesystems[1469]: Found sr0 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.057649859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.057925269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.057943689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.057958179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.057967949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.058053949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.058253249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.058350259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.058365249Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.058444179Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:56:07.068805 containerd[1500]: time="2026-01-24T00:56:07.058482729Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:56:07.056568 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:56:07.056780 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:56:07.070607 containerd[1500]: time="2026-01-24T00:56:07.070525909Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:56:07.070607 containerd[1500]: time="2026-01-24T00:56:07.070599419Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 24 00:56:07.070651 containerd[1500]: time="2026-01-24T00:56:07.070613019Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:56:07.070651 containerd[1500]: time="2026-01-24T00:56:07.070625579Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:56:07.070651 containerd[1500]: time="2026-01-24T00:56:07.070637039Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:56:07.070977 containerd[1500]: time="2026-01-24T00:56:07.070818269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071108539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071215289Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071226169Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071235839Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071245869Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071254629Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071263389Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071277939Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071291229Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071300699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071309589Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071322809Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071339829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072013 containerd[1500]: time="2026-01-24T00:56:07.071351219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071362309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071371809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071379999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071391139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071399029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071407719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071415739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071427449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071435849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071443509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071454359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071464639Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071491909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071503889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072197 containerd[1500]: time="2026-01-24T00:56:07.071513869Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071573549Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071591989Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071599369Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071607059Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071614079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071622629Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071633829Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:56:07.072417 containerd[1500]: time="2026-01-24T00:56:07.071643919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:56:07.072528 containerd[1500]: time="2026-01-24T00:56:07.071859849Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:56:07.072528 containerd[1500]: time="2026-01-24T00:56:07.071909899Z" level=info msg="Connect containerd service" Jan 24 00:56:07.072528 containerd[1500]: time="2026-01-24T00:56:07.071941729Z" level=info msg="using legacy CRI server" Jan 24 00:56:07.072528 containerd[1500]: time="2026-01-24T00:56:07.071946609Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:56:07.072528 containerd[1500]: 
time="2026-01-24T00:56:07.072016369Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:56:07.072528 containerd[1500]: time="2026-01-24T00:56:07.072460939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.075923879Z" level=info msg="Start subscribing containerd event" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.075975239Z" level=info msg="Start recovering state" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.076028189Z" level=info msg="Start event monitor" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.076035899Z" level=info msg="Start snapshots syncer" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.076042659Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.076054659Z" level=info msg="Start streaming server" Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.077839959Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:56:07.077983 containerd[1500]: time="2026-01-24T00:56:07.077893869Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:56:07.083829 containerd[1500]: time="2026-01-24T00:56:07.083802119Z" level=info msg="containerd successfully booted in 0.071916s" Jan 24 00:56:07.086343 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:56:07.103610 update-ssh-keys[1552]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:56:07.105946 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:56:07.113093 systemd[1]: Finished sshkeys.service. Jan 24 00:56:07.276269 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:56:07.295235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:56:07.304853 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:56:07.313439 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:56:07.313721 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:56:07.324122 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:56:07.333919 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:56:07.344902 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:56:07.351031 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:56:07.354056 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:56:07.401914 tar[1493]: linux-amd64/README.md Jan 24 00:56:07.412902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:56:08.036922 systemd-networkd[1402]: eth1: Gained IPv6LL Jan 24 00:56:08.037972 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 24 00:56:08.043061 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:56:08.045456 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:56:08.057008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:56:08.077637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:56:08.134238 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:56:08.293040 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 24 00:56:08.294139 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 24 00:56:09.141810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:09.144362 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:56:09.149747 systemd[1]: Startup finished in 1.658s (kernel) + 6.182s (initrd) + 5.238s (userspace) = 13.079s. Jan 24 00:56:09.153518 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:56:10.020307 kubelet[1594]: E0124 00:56:10.020159 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:56:10.026284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:56:10.026727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:56:10.027477 systemd[1]: kubelet.service: Consumed 1.387s CPU time. Jan 24 00:56:13.133436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:56:13.140032 systemd[1]: Started sshd@0-89.167.6.198:22-20.161.92.111:43226.service - OpenSSH per-connection server daemon (20.161.92.111:43226). Jan 24 00:56:13.930572 sshd[1606]: Accepted publickey for core from 20.161.92.111 port 43226 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:13.934365 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:13.954293 systemd-logind[1476]: New session 1 of user core. Jan 24 00:56:13.956084 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:56:13.962393 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:56:14.002802 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:56:14.017207 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:56:14.030601 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:56:14.161714 systemd[1610]: Queued start job for default target default.target. Jan 24 00:56:14.168707 systemd[1610]: Created slice app.slice - User Application Slice. Jan 24 00:56:14.168730 systemd[1610]: Reached target paths.target - Paths. Jan 24 00:56:14.168742 systemd[1610]: Reached target timers.target - Timers. Jan 24 00:56:14.170278 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:56:14.205948 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:56:14.206208 systemd[1610]: Reached target sockets.target - Sockets. Jan 24 00:56:14.206240 systemd[1610]: Reached target basic.target - Basic System. Jan 24 00:56:14.206322 systemd[1610]: Reached target default.target - Main User Target. Jan 24 00:56:14.206393 systemd[1610]: Startup finished in 163ms. Jan 24 00:56:14.206623 systemd[1]: Started user@500.service - User Manager for UID 500. 
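
The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-join state: systemd starts kubelet before kubeadm has initialized the node, so it exits and keeps getting restarted (the "Scheduled restart job" entries later) until that file exists. For orientation only, kubeadm eventually writes a KubeletConfiguration there; a skeleton of it, written from Python (field values illustrative, and not something to create by hand on a kubeadm-managed node):

    # Sketch of the file whose absence causes the crash loop above.
    import pathlib, textwrap

    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
    """)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(CONFIG)
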
Jan 24 00:56:14.221947 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:56:14.776306 systemd[1]: Started sshd@1-89.167.6.198:22-20.161.92.111:43236.service - OpenSSH per-connection server daemon (20.161.92.111:43236). Jan 24 00:56:15.552390 sshd[1621]: Accepted publickey for core from 20.161.92.111 port 43236 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:15.555440 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:15.565667 systemd-logind[1476]: New session 2 of user core. Jan 24 00:56:15.572932 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:56:16.093372 sshd[1621]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:16.098653 systemd[1]: sshd@1-89.167.6.198:22-20.161.92.111:43236.service: Deactivated successfully. Jan 24 00:56:16.102488 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:56:16.105044 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:56:16.106971 systemd-logind[1476]: Removed session 2. Jan 24 00:56:16.234970 systemd[1]: Started sshd@2-89.167.6.198:22-20.161.92.111:43238.service - OpenSSH per-connection server daemon (20.161.92.111:43238). Jan 24 00:56:17.023551 sshd[1628]: Accepted publickey for core from 20.161.92.111 port 43238 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:17.026033 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:17.034066 systemd-logind[1476]: New session 3 of user core. Jan 24 00:56:17.043951 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:56:17.555904 sshd[1628]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:17.562329 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:56:17.564217 systemd[1]: sshd@2-89.167.6.198:22-20.161.92.111:43238.service: Deactivated successfully. Jan 24 00:56:17.567813 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:56:17.569296 systemd-logind[1476]: Removed session 3. Jan 24 00:56:17.694049 systemd[1]: Started sshd@3-89.167.6.198:22-20.161.92.111:43246.service - OpenSSH per-connection server daemon (20.161.92.111:43246). Jan 24 00:56:18.470759 sshd[1635]: Accepted publickey for core from 20.161.92.111 port 43246 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:18.473459 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:18.481165 systemd-logind[1476]: New session 4 of user core. Jan 24 00:56:18.490811 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:56:19.010617 sshd[1635]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:19.017144 systemd[1]: sshd@3-89.167.6.198:22-20.161.92.111:43246.service: Deactivated successfully. Jan 24 00:56:19.020864 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:56:19.021865 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:56:19.023457 systemd-logind[1476]: Removed session 4. Jan 24 00:56:19.149940 systemd[1]: Started sshd@4-89.167.6.198:22-20.161.92.111:43254.service - OpenSSH per-connection server daemon (20.161.92.111:43254). 
Jan 24 00:56:19.922528 sshd[1642]: Accepted publickey for core from 20.161.92.111 port 43254 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:19.925213 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:19.932702 systemd-logind[1476]: New session 5 of user core. Jan 24 00:56:19.939771 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:56:20.194244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:56:20.203897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:20.345404 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:56:20.345780 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:20.371524 sudo[1648]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:20.388249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:20.391773 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:56:20.419979 kubelet[1655]: E0124 00:56:20.419924 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:56:20.423720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:56:20.423894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:56:20.495743 sshd[1642]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:20.502151 systemd[1]: sshd@4-89.167.6.198:22-20.161.92.111:43254.service: Deactivated successfully. Jan 24 00:56:20.506111 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:56:20.508871 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:56:20.511303 systemd-logind[1476]: Removed session 5. Jan 24 00:56:20.638057 systemd[1]: Started sshd@5-89.167.6.198:22-20.161.92.111:43264.service - OpenSSH per-connection server daemon (20.161.92.111:43264). Jan 24 00:56:21.416122 sshd[1665]: Accepted publickey for core from 20.161.92.111 port 43264 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:21.419031 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:21.426407 systemd-logind[1476]: New session 6 of user core. Jan 24 00:56:21.437769 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:56:21.833498 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:56:21.834284 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:21.840844 sudo[1669]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:21.852744 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:56:21.853409 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:21.875046 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 24 00:56:21.881861 auditctl[1672]: No rules Jan 24 00:56:21.882702 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:56:21.883079 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:56:21.891071 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:56:21.954761 augenrules[1690]: No rules Jan 24 00:56:21.957775 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:56:21.960878 sudo[1668]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:22.084850 sshd[1665]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:22.091510 systemd[1]: sshd@5-89.167.6.198:22-20.161.92.111:43264.service: Deactivated successfully. Jan 24 00:56:22.095425 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:56:22.098344 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:56:22.100500 systemd-logind[1476]: Removed session 6. Jan 24 00:56:22.222986 systemd[1]: Started sshd@6-89.167.6.198:22-20.161.92.111:52512.service - OpenSSH per-connection server daemon (20.161.92.111:52512). Jan 24 00:56:23.001352 sshd[1698]: Accepted publickey for core from 20.161.92.111 port 52512 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:56:23.003670 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:23.010607 systemd-logind[1476]: New session 7 of user core. Jan 24 00:56:23.024108 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:56:23.412060 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:56:23.412428 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:23.728783 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:56:23.746374 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:56:24.157458 dockerd[1717]: time="2026-01-24T00:56:24.157280078Z" level=info msg="Starting up" Jan 24 00:56:24.304632 dockerd[1717]: time="2026-01-24T00:56:24.304573558Z" level=info msg="Loading containers: start." Jan 24 00:56:24.421568 kernel: Initializing XFRM netlink socket Jan 24 00:56:24.465861 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 24 00:56:24.548410 systemd-networkd[1402]: docker0: Link UP Jan 24 00:56:24.571601 dockerd[1717]: time="2026-01-24T00:56:24.571515068Z" level=info msg="Loading containers: done." Jan 24 00:56:25.803909 systemd-resolved[1404]: Clock change detected. Flushing caches. Jan 24 00:56:25.804510 systemd-timesyncd[1435]: Contacted time server 79.133.44.137:123 (2.flatcar.pool.ntp.org). Jan 24 00:56:25.804574 systemd-timesyncd[1435]: Initial clock synchronization to Sat 2026-01-24 00:56:25.803852 UTC. Jan 24 00:56:25.809967 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2538392288-merged.mount: Deactivated successfully. 
Jan 24 00:56:25.812589 dockerd[1717]: time="2026-01-24T00:56:25.812525620Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:56:25.812702 dockerd[1717]: time="2026-01-24T00:56:25.812636050Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:56:25.812916 dockerd[1717]: time="2026-01-24T00:56:25.812881180Z" level=info msg="Daemon has completed initialization" Jan 24 00:56:25.852867 dockerd[1717]: time="2026-01-24T00:56:25.852713150Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:56:25.853171 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:56:27.118790 containerd[1500]: time="2026-01-24T00:56:27.118397350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:56:27.812106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193042719.mount: Deactivated successfully. Jan 24 00:56:29.203704 containerd[1500]: time="2026-01-24T00:56:29.203639969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:29.205061 containerd[1500]: time="2026-01-24T00:56:29.205002939Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070747" Jan 24 00:56:29.205767 containerd[1500]: time="2026-01-24T00:56:29.205691819Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:29.207991 containerd[1500]: time="2026-01-24T00:56:29.207964619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:29.208766 containerd[1500]: time="2026-01-24T00:56:29.208620509Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.090175179s" Jan 24 00:56:29.208766 containerd[1500]: time="2026-01-24T00:56:29.208645799Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:56:29.209517 containerd[1500]: time="2026-01-24T00:56:29.209473889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:56:30.525573 containerd[1500]: time="2026-01-24T00:56:30.525515759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:30.526847 containerd[1500]: time="2026-01-24T00:56:30.526802439Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993376" Jan 24 00:56:30.527779 containerd[1500]: time="2026-01-24T00:56:30.527569759Z" level=info msg="ImageCreate event 
name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:30.531312 containerd[1500]: time="2026-01-24T00:56:30.529781739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:30.531312 containerd[1500]: time="2026-01-24T00:56:30.531167639Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.321667s" Jan 24 00:56:30.531312 containerd[1500]: time="2026-01-24T00:56:30.531203229Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 00:56:30.532087 containerd[1500]: time="2026-01-24T00:56:30.531863589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:56:31.599389 containerd[1500]: time="2026-01-24T00:56:31.599345859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:31.600599 containerd[1500]: time="2026-01-24T00:56:31.600473199Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405098" Jan 24 00:56:31.601791 containerd[1500]: time="2026-01-24T00:56:31.601529359Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:31.603717 containerd[1500]: time="2026-01-24T00:56:31.603700819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:31.604511 containerd[1500]: time="2026-01-24T00:56:31.604493169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.07259825s" Jan 24 00:56:31.604569 containerd[1500]: time="2026-01-24T00:56:31.604558029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:56:31.605139 containerd[1500]: time="2026-01-24T00:56:31.605119359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:56:31.665318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:56:31.671045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:31.873481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:56:31.893368 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:56:31.968790 kubelet[1928]: E0124 00:56:31.967825 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:56:31.976455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:56:31.976891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:56:32.756526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044132733.mount: Deactivated successfully. Jan 24 00:56:33.035257 containerd[1500]: time="2026-01-24T00:56:33.035142459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:33.036412 containerd[1500]: time="2026-01-24T00:56:33.036286659Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161927" Jan 24 00:56:33.037553 containerd[1500]: time="2026-01-24T00:56:33.037307419Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:33.040010 containerd[1500]: time="2026-01-24T00:56:33.039210419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:33.040010 containerd[1500]: time="2026-01-24T00:56:33.039691499Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.43455011s" Jan 24 00:56:33.040010 containerd[1500]: time="2026-01-24T00:56:33.039716219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:56:33.040260 containerd[1500]: time="2026-01-24T00:56:33.040233479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:56:33.571489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553886281.mount: Deactivated successfully. 
Jan 24 00:56:34.605007 containerd[1500]: time="2026-01-24T00:56:34.604937289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:34.606598 containerd[1500]: time="2026-01-24T00:56:34.606542889Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Jan 24 00:56:34.608151 containerd[1500]: time="2026-01-24T00:56:34.606893369Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:34.610702 containerd[1500]: time="2026-01-24T00:56:34.609627989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:34.610702 containerd[1500]: time="2026-01-24T00:56:34.610351759Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.57004315s" Jan 24 00:56:34.610702 containerd[1500]: time="2026-01-24T00:56:34.610372419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:56:34.610901 containerd[1500]: time="2026-01-24T00:56:34.610888939Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:56:35.101668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944935627.mount: Deactivated successfully. 
Jan 24 00:56:35.112280 containerd[1500]: time="2026-01-24T00:56:35.112212039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:35.116851 containerd[1500]: time="2026-01-24T00:56:35.116778279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 24 00:56:35.121442 containerd[1500]: time="2026-01-24T00:56:35.120091099Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:35.124616 containerd[1500]: time="2026-01-24T00:56:35.124567719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:35.126100 containerd[1500]: time="2026-01-24T00:56:35.126059309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 515.11529ms" Jan 24 00:56:35.126323 containerd[1500]: time="2026-01-24T00:56:35.126295879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:56:35.127634 containerd[1500]: time="2026-01-24T00:56:35.127589679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:56:35.700532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343748418.mount: Deactivated successfully. Jan 24 00:56:37.546009 containerd[1500]: time="2026-01-24T00:56:37.545956589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:37.547114 containerd[1500]: time="2026-01-24T00:56:37.546939889Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Jan 24 00:56:37.548016 containerd[1500]: time="2026-01-24T00:56:37.547770289Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:37.549963 containerd[1500]: time="2026-01-24T00:56:37.549942819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:37.550753 containerd[1500]: time="2026-01-24T00:56:37.550714729Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.42308916s" Jan 24 00:56:37.550819 containerd[1500]: time="2026-01-24T00:56:37.550808419Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:56:41.810326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:56:41.824209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:41.882556 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)... Jan 24 00:56:41.882579 systemd[1]: Reloading... Jan 24 00:56:42.002770 zram_generator::config[2120]: No configuration found. Jan 24 00:56:42.081582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:42.142200 systemd[1]: Reloading finished in 258 ms. Jan 24 00:56:42.191513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:42.195313 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:56:42.196405 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:42.197157 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:56:42.197555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:42.202418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:42.359897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:42.372940 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:56:42.415680 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:42.415680 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:56:42.415680 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:56:42.415680 kubelet[2177]: I0124 00:56:42.415500 2177 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:56:42.707496 kubelet[2177]: I0124 00:56:42.707448 2177 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:56:42.707496 kubelet[2177]: I0124 00:56:42.707487 2177 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:56:42.707895 kubelet[2177]: I0124 00:56:42.707874 2177 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:56:42.733176 kubelet[2177]: E0124 00:56:42.733136 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://89.167.6.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:42.739115 kubelet[2177]: I0124 00:56:42.739097 2177 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:56:42.748121 kubelet[2177]: E0124 00:56:42.748099 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:56:42.748280 kubelet[2177]: I0124 00:56:42.748197 2177 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:56:42.752000 kubelet[2177]: I0124 00:56:42.751987 2177 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:56:42.754049 kubelet[2177]: I0124 00:56:42.753993 2177 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:56:42.754172 kubelet[2177]: I0124 00:56:42.754020 2177 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-32cc93a80b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:56:42.754172 kubelet[2177]: I0124 00:56:42.754156 2177 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:56:42.754172 kubelet[2177]: I0124 00:56:42.754163 2177 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:56:42.754362 kubelet[2177]: I0124 00:56:42.754261 2177 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:42.757715 kubelet[2177]: I0124 00:56:42.757681 2177 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:56:42.757715 kubelet[2177]: I0124 00:56:42.757704 2177 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:56:42.757715 kubelet[2177]: I0124 00:56:42.757719 2177 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:56:42.758827 kubelet[2177]: I0124 00:56:42.757728 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:56:42.768214 kubelet[2177]: W0124 00:56:42.768112 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://89.167.6.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:42.768214 kubelet[2177]: E0124 00:56:42.768143 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://89.167.6.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:42.769050 kubelet[2177]: W0124 
00:56:42.768934 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://89.167.6.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-32cc93a80b&limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:42.769050 kubelet[2177]: E0124 00:56:42.768994 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://89.167.6.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-32cc93a80b&limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:42.769213 kubelet[2177]: I0124 00:56:42.769185 2177 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:56:42.769821 kubelet[2177]: I0124 00:56:42.769803 2177 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:56:42.770695 kubelet[2177]: W0124 00:56:42.770673 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:56:42.774038 kubelet[2177]: I0124 00:56:42.774010 2177 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:56:42.774092 kubelet[2177]: I0124 00:56:42.774080 2177 server.go:1287] "Started kubelet" Jan 24 00:56:42.775988 kubelet[2177]: I0124 00:56:42.775957 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:56:42.778565 kubelet[2177]: E0124 00:56:42.777508 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://89.167.6.198:6443/api/v1/namespaces/default/events\": dial tcp 89.167.6.198:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-32cc93a80b.188d84c34136389c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-32cc93a80b,UID:ci-4081-3-6-n-32cc93a80b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-32cc93a80b,},FirstTimestamp:2026-01-24 00:56:42.774034588 +0000 UTC m=+0.394689121,LastTimestamp:2026-01-24 00:56:42.774034588 +0000 UTC m=+0.394689121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-32cc93a80b,}" Jan 24 00:56:42.780836 kubelet[2177]: I0124 00:56:42.780325 2177 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:56:42.781199 kubelet[2177]: I0124 00:56:42.781187 2177 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:56:42.782759 kubelet[2177]: I0124 00:56:42.782712 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:56:42.782945 kubelet[2177]: I0124 00:56:42.782934 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:56:42.783527 kubelet[2177]: I0124 00:56:42.783501 2177 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:56:42.783872 kubelet[2177]: E0124 00:56:42.783846 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:42.787378 kubelet[2177]: E0124 
00:56:42.787357 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.6.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32cc93a80b?timeout=10s\": dial tcp 89.167.6.198:6443: connect: connection refused" interval="200ms" Jan 24 00:56:42.787601 kubelet[2177]: I0124 00:56:42.787590 2177 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:56:42.787710 kubelet[2177]: I0124 00:56:42.787699 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:56:42.791315 kubelet[2177]: I0124 00:56:42.791288 2177 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:56:42.793178 kubelet[2177]: E0124 00:56:42.793146 2177 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:56:42.793409 kubelet[2177]: I0124 00:56:42.793378 2177 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:56:42.793557 kubelet[2177]: I0124 00:56:42.793532 2177 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:56:42.793610 kubelet[2177]: I0124 00:56:42.793593 2177 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:56:42.809311 kubelet[2177]: W0124 00:56:42.809280 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://89.167.6.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:42.809447 kubelet[2177]: E0124 00:56:42.809384 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://89.167.6.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:42.815343 kubelet[2177]: I0124 00:56:42.815310 2177 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:56:42.815499 kubelet[2177]: I0124 00:56:42.815424 2177 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:56:42.815499 kubelet[2177]: I0124 00:56:42.815451 2177 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:42.820030 kubelet[2177]: I0124 00:56:42.819855 2177 policy_none.go:49] "None policy: Start" Jan 24 00:56:42.820030 kubelet[2177]: I0124 00:56:42.819875 2177 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:56:42.820030 kubelet[2177]: I0124 00:56:42.819885 2177 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:56:42.826695 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:56:42.829562 kubelet[2177]: I0124 00:56:42.829454 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:56:42.831933 kubelet[2177]: I0124 00:56:42.831868 2177 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:56:42.831933 kubelet[2177]: I0124 00:56:42.831882 2177 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:56:42.831933 kubelet[2177]: I0124 00:56:42.831897 2177 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:56:42.831933 kubelet[2177]: I0124 00:56:42.831904 2177 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:56:42.832198 kubelet[2177]: E0124 00:56:42.832071 2177 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:56:42.834569 kubelet[2177]: W0124 00:56:42.834552 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://89.167.6.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:42.834710 kubelet[2177]: E0124 00:56:42.834647 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://89.167.6.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:42.837851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:56:42.841936 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:56:42.853884 kubelet[2177]: I0124 00:56:42.853852 2177 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:56:42.854198 kubelet[2177]: I0124 00:56:42.854151 2177 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:56:42.854198 kubelet[2177]: I0124 00:56:42.854171 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:56:42.854524 kubelet[2177]: I0124 00:56:42.854500 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:56:42.857268 kubelet[2177]: E0124 00:56:42.857240 2177 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:56:42.857352 kubelet[2177]: E0124 00:56:42.857331 2177 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:42.948967 systemd[1]: Created slice kubepods-burstable-pod0dffbbb0eb7a6d47fe54af980d4623ec.slice - libcontainer container kubepods-burstable-pod0dffbbb0eb7a6d47fe54af980d4623ec.slice. 
Jan 24 00:56:42.956961 kubelet[2177]: I0124 00:56:42.956893 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.957630 kubelet[2177]: E0124 00:56:42.957415 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.6.198:6443/api/v1/nodes\": dial tcp 89.167.6.198:6443: connect: connection refused" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.964705 kubelet[2177]: E0124 00:56:42.964647 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.970311 systemd[1]: Created slice kubepods-burstable-pode63aa3adde624d08ed8b79c4194ca41f.slice - libcontainer container kubepods-burstable-pode63aa3adde624d08ed8b79c4194ca41f.slice. Jan 24 00:56:42.977205 kubelet[2177]: E0124 00:56:42.976876 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.979926 systemd[1]: Created slice kubepods-burstable-pode7382ce665c84654c9379bb985be579b.slice - libcontainer container kubepods-burstable-pode7382ce665c84654c9379bb985be579b.slice. Jan 24 00:56:42.982470 kubelet[2177]: E0124 00:56:42.982423 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.988134 kubelet[2177]: E0124 00:56:42.988088 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.6.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32cc93a80b?timeout=10s\": dial tcp 89.167.6.198:6443: connect: connection refused" interval="400ms" Jan 24 00:56:42.995780 kubelet[2177]: I0124 00:56:42.995492 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e63aa3adde624d08ed8b79c4194ca41f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" (UID: \"e63aa3adde624d08ed8b79c4194ca41f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.995780 kubelet[2177]: I0124 00:56:42.995537 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.995780 kubelet[2177]: I0124 00:56:42.995563 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.995780 kubelet[2177]: I0124 00:56:42.995587 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dffbbb0eb7a6d47fe54af980d4623ec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-32cc93a80b\" (UID: \"0dffbbb0eb7a6d47fe54af980d4623ec\") " 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.995780 kubelet[2177]: I0124 00:56:42.995614 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e63aa3adde624d08ed8b79c4194ca41f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" (UID: \"e63aa3adde624d08ed8b79c4194ca41f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.996033 kubelet[2177]: I0124 00:56:42.995636 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.996033 kubelet[2177]: I0124 00:56:42.995660 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.996033 kubelet[2177]: I0124 00:56:42.995687 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e63aa3adde624d08ed8b79c4194ca41f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" (UID: \"e63aa3adde624d08ed8b79c4194ca41f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:42.996033 kubelet[2177]: I0124 00:56:42.995709 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:43.161190 kubelet[2177]: I0124 00:56:43.161141 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:43.161579 kubelet[2177]: E0124 00:56:43.161533 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.6.198:6443/api/v1/nodes\": dial tcp 89.167.6.198:6443: connect: connection refused" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:43.266372 containerd[1500]: time="2026-01-24T00:56:43.266173728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-32cc93a80b,Uid:0dffbbb0eb7a6d47fe54af980d4623ec,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:43.278511 containerd[1500]: time="2026-01-24T00:56:43.278454878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-32cc93a80b,Uid:e63aa3adde624d08ed8b79c4194ca41f,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:43.284388 containerd[1500]: time="2026-01-24T00:56:43.284326128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-32cc93a80b,Uid:e7382ce665c84654c9379bb985be579b,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:43.390008 kubelet[2177]: E0124 00:56:43.389940 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://89.167.6.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32cc93a80b?timeout=10s\": dial tcp 89.167.6.198:6443: connect: connection refused" interval="800ms" Jan 24 00:56:43.565111 kubelet[2177]: I0124 00:56:43.564952 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:43.565685 kubelet[2177]: E0124 00:56:43.565349 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.6.198:6443/api/v1/nodes\": dial tcp 89.167.6.198:6443: connect: connection refused" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:43.622527 kubelet[2177]: W0124 00:56:43.622453 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://89.167.6.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:43.622527 kubelet[2177]: E0124 00:56:43.622524 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://89.167.6.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:43.728593 kubelet[2177]: W0124 00:56:43.728506 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://89.167.6.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:43.728593 kubelet[2177]: E0124 00:56:43.728594 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://89.167.6.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:43.731507 kubelet[2177]: W0124 00:56:43.731466 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://89.167.6.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:43.731683 kubelet[2177]: E0124 00:56:43.731506 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://89.167.6.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:43.760371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488067681.mount: Deactivated successfully. 
Jan 24 00:56:43.769697 containerd[1500]: time="2026-01-24T00:56:43.769618588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:43.772836 containerd[1500]: time="2026-01-24T00:56:43.772386388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:56:43.773861 containerd[1500]: time="2026-01-24T00:56:43.773824218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:43.775137 containerd[1500]: time="2026-01-24T00:56:43.775099698Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:43.777406 containerd[1500]: time="2026-01-24T00:56:43.777338968Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:43.778338 containerd[1500]: time="2026-01-24T00:56:43.778154368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:56:43.779729 containerd[1500]: time="2026-01-24T00:56:43.779668358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jan 24 00:56:43.782767 containerd[1500]: time="2026-01-24T00:56:43.781816928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:43.785778 containerd[1500]: time="2026-01-24T00:56:43.785704208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.29706ms" Jan 24 00:56:43.789151 containerd[1500]: time="2026-01-24T00:56:43.789111588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.80656ms" Jan 24 00:56:43.789511 containerd[1500]: time="2026-01-24T00:56:43.789436128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.89491ms" Jan 24 00:56:43.951864 containerd[1500]: time="2026-01-24T00:56:43.951411878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:43.951864 containerd[1500]: time="2026-01-24T00:56:43.951490898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:43.951864 containerd[1500]: time="2026-01-24T00:56:43.951510628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:43.951864 containerd[1500]: time="2026-01-24T00:56:43.951644538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:43.968496 containerd[1500]: time="2026-01-24T00:56:43.968103898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:43.968496 containerd[1500]: time="2026-01-24T00:56:43.968271128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:43.968496 containerd[1500]: time="2026-01-24T00:56:43.968339278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:43.968496 containerd[1500]: time="2026-01-24T00:56:43.968306408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:43.968496 containerd[1500]: time="2026-01-24T00:56:43.968376818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:43.968496 containerd[1500]: time="2026-01-24T00:56:43.968431168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:43.973759 containerd[1500]: time="2026-01-24T00:56:43.970513798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:43.973759 containerd[1500]: time="2026-01-24T00:56:43.968533848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:43.983673 kubelet[2177]: W0124 00:56:43.983588 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://89.167.6.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-32cc93a80b&limit=500&resourceVersion=0": dial tcp 89.167.6.198:6443: connect: connection refused Jan 24 00:56:43.983673 kubelet[2177]: E0124 00:56:43.983675 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://89.167.6.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-32cc93a80b&limit=500&resourceVersion=0\": dial tcp 89.167.6.198:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:56:43.995858 systemd[1]: Started cri-containerd-b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d.scope - libcontainer container b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d. Jan 24 00:56:43.998811 systemd[1]: Started cri-containerd-a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad.scope - libcontainer container a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad. 
Jan 24 00:56:44.007917 systemd[1]: Started cri-containerd-4fcd09a830dcbbb3c25d2c8c73e7e0330d1ffa9b98b01d1ecb984c2df1c9ec4f.scope - libcontainer container 4fcd09a830dcbbb3c25d2c8c73e7e0330d1ffa9b98b01d1ecb984c2df1c9ec4f. Jan 24 00:56:44.062100 containerd[1500]: time="2026-01-24T00:56:44.061851728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-32cc93a80b,Uid:0dffbbb0eb7a6d47fe54af980d4623ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad\"" Jan 24 00:56:44.064259 containerd[1500]: time="2026-01-24T00:56:44.064237688Z" level=info msg="CreateContainer within sandbox \"a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:56:44.065517 containerd[1500]: time="2026-01-24T00:56:44.065497378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-32cc93a80b,Uid:e7382ce665c84654c9379bb985be579b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d\"" Jan 24 00:56:44.070540 containerd[1500]: time="2026-01-24T00:56:44.070514538Z" level=info msg="CreateContainer within sandbox \"b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:56:44.072365 containerd[1500]: time="2026-01-24T00:56:44.072220768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-32cc93a80b,Uid:e63aa3adde624d08ed8b79c4194ca41f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fcd09a830dcbbb3c25d2c8c73e7e0330d1ffa9b98b01d1ecb984c2df1c9ec4f\"" Jan 24 00:56:44.076433 containerd[1500]: time="2026-01-24T00:56:44.076415808Z" level=info msg="CreateContainer within sandbox \"4fcd09a830dcbbb3c25d2c8c73e7e0330d1ffa9b98b01d1ecb984c2df1c9ec4f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:56:44.091889 containerd[1500]: time="2026-01-24T00:56:44.091864638Z" level=info msg="CreateContainer within sandbox \"a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb\"" Jan 24 00:56:44.092506 containerd[1500]: time="2026-01-24T00:56:44.092485568Z" level=info msg="StartContainer for \"171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb\"" Jan 24 00:56:44.097760 containerd[1500]: time="2026-01-24T00:56:44.097683068Z" level=info msg="CreateContainer within sandbox \"b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b\"" Jan 24 00:56:44.098075 containerd[1500]: time="2026-01-24T00:56:44.098048938Z" level=info msg="StartContainer for \"a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b\"" Jan 24 00:56:44.098425 containerd[1500]: time="2026-01-24T00:56:44.098357768Z" level=info msg="CreateContainer within sandbox \"4fcd09a830dcbbb3c25d2c8c73e7e0330d1ffa9b98b01d1ecb984c2df1c9ec4f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a505df0c18b8befcfb7f946e5c51db1b8dab0000dbed82a31b41214e4dd9cb3c\"" Jan 24 00:56:44.098601 containerd[1500]: time="2026-01-24T00:56:44.098582428Z" level=info msg="StartContainer for 
\"a505df0c18b8befcfb7f946e5c51db1b8dab0000dbed82a31b41214e4dd9cb3c\"" Jan 24 00:56:44.126020 systemd[1]: Started cri-containerd-171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb.scope - libcontainer container 171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb. Jan 24 00:56:44.141860 systemd[1]: Started cri-containerd-a505df0c18b8befcfb7f946e5c51db1b8dab0000dbed82a31b41214e4dd9cb3c.scope - libcontainer container a505df0c18b8befcfb7f946e5c51db1b8dab0000dbed82a31b41214e4dd9cb3c. Jan 24 00:56:44.145000 systemd[1]: Started cri-containerd-a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b.scope - libcontainer container a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b. Jan 24 00:56:44.177435 containerd[1500]: time="2026-01-24T00:56:44.177392078Z" level=info msg="StartContainer for \"171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb\" returns successfully" Jan 24 00:56:44.190477 kubelet[2177]: E0124 00:56:44.190372 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.6.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32cc93a80b?timeout=10s\": dial tcp 89.167.6.198:6443: connect: connection refused" interval="1.6s" Jan 24 00:56:44.202891 containerd[1500]: time="2026-01-24T00:56:44.202169148Z" level=info msg="StartContainer for \"a505df0c18b8befcfb7f946e5c51db1b8dab0000dbed82a31b41214e4dd9cb3c\" returns successfully" Jan 24 00:56:44.211442 containerd[1500]: time="2026-01-24T00:56:44.211407188Z" level=info msg="StartContainer for \"a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b\" returns successfully" Jan 24 00:56:44.368829 kubelet[2177]: I0124 00:56:44.368805 2177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:44.847822 kubelet[2177]: E0124 00:56:44.847792 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:44.850237 kubelet[2177]: E0124 00:56:44.849745 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:44.850807 kubelet[2177]: E0124 00:56:44.850796 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:45.585162 kubelet[2177]: I0124 00:56:45.585111 2177 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:45.585162 kubelet[2177]: E0124 00:56:45.585159 2177 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-32cc93a80b\": node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:45.626275 kubelet[2177]: E0124 00:56:45.626209 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:45.726998 kubelet[2177]: E0124 00:56:45.726864 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:45.827923 kubelet[2177]: E0124 00:56:45.827833 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:45.855667 
kubelet[2177]: E0124 00:56:45.854857 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:45.855667 kubelet[2177]: E0124 00:56:45.855291 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:45.928626 kubelet[2177]: E0124 00:56:45.928506 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.029581 kubelet[2177]: E0124 00:56:46.029510 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.130861 kubelet[2177]: E0124 00:56:46.130644 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.231769 kubelet[2177]: E0124 00:56:46.231692 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.332252 kubelet[2177]: E0124 00:56:46.332190 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.433252 kubelet[2177]: E0124 00:56:46.433197 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.534358 kubelet[2177]: E0124 00:56:46.534299 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.634977 kubelet[2177]: E0124 00:56:46.634918 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.735859 kubelet[2177]: E0124 00:56:46.735657 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.836429 kubelet[2177]: E0124 00:56:46.836370 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.937911 kubelet[2177]: E0124 00:56:46.937854 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32cc93a80b\" not found" Jan 24 00:56:46.984688 kubelet[2177]: I0124 00:56:46.984625 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:46.995954 kubelet[2177]: I0124 00:56:46.995519 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:47.000013 kubelet[2177]: I0124 00:56:46.999846 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:47.009439 kubelet[2177]: E0124 00:56:47.009295 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:47.009439 kubelet[2177]: I0124 00:56:47.009335 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:47.367964 systemd[1]: Reloading 
requested from client PID 2455 ('systemctl') (unit session-7.scope)... Jan 24 00:56:47.367988 systemd[1]: Reloading... Jan 24 00:56:47.504764 zram_generator::config[2501]: No configuration found. Jan 24 00:56:47.586555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:47.659626 systemd[1]: Reloading finished in 290 ms. Jan 24 00:56:47.699981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:47.722050 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:56:47.722316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:47.726971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:47.843980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:47.845688 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:56:47.900935 kubelet[2546]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:47.901295 kubelet[2546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:56:47.901295 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:47.901411 kubelet[2546]: I0124 00:56:47.901307 2546 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:56:47.908351 kubelet[2546]: I0124 00:56:47.908315 2546 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:56:47.908351 kubelet[2546]: I0124 00:56:47.908330 2546 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:56:47.908503 kubelet[2546]: I0124 00:56:47.908453 2546 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:56:47.909237 kubelet[2546]: I0124 00:56:47.909209 2546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:56:47.911544 kubelet[2546]: I0124 00:56:47.910560 2546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:56:47.916832 kubelet[2546]: E0124 00:56:47.916795 2546 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:56:47.916832 kubelet[2546]: I0124 00:56:47.916816 2546 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:56:47.919654 kubelet[2546]: I0124 00:56:47.919630 2546 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:56:47.919857 kubelet[2546]: I0124 00:56:47.919819 2546 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:56:47.919978 kubelet[2546]: I0124 00:56:47.919838 2546 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-32cc93a80b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:56:47.919978 kubelet[2546]: I0124 00:56:47.919959 2546 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:56:47.919978 kubelet[2546]: I0124 00:56:47.919966 2546 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:56:47.920235 kubelet[2546]: I0124 00:56:47.920002 2546 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:47.920235 kubelet[2546]: I0124 00:56:47.920134 2546 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:56:47.920235 kubelet[2546]: I0124 00:56:47.920149 2546 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:56:47.920235 kubelet[2546]: I0124 00:56:47.920161 2546 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:56:47.920235 kubelet[2546]: I0124 00:56:47.920170 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:56:47.923114 kubelet[2546]: I0124 00:56:47.923100 2546 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:56:47.923383 kubelet[2546]: I0124 00:56:47.923344 2546 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:56:47.923661 kubelet[2546]: I0124 00:56:47.923625 2546 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:56:47.923661 kubelet[2546]: I0124 00:56:47.923647 2546 server.go:1287] "Started kubelet" Jan 24 00:56:47.927782 kubelet[2546]: I0124 00:56:47.926039 2546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:56:47.939032 kubelet[2546]: I0124 00:56:47.938825 2546 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:56:47.941723 kubelet[2546]: I0124 00:56:47.941697 2546 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:56:47.945907 kubelet[2546]: I0124 00:56:47.944892 2546 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:56:47.945907 kubelet[2546]: I0124 00:56:47.945055 2546 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:56:47.945907 kubelet[2546]: I0124 00:56:47.945214 2546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:56:47.947113 kubelet[2546]: I0124 00:56:47.947036 2546 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:56:47.950402 kubelet[2546]: I0124 00:56:47.950377 2546 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:56:47.952330 kubelet[2546]: I0124 00:56:47.952309 2546 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:56:47.956830 kubelet[2546]: E0124 00:56:47.956801 2546 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:56:47.959809 kubelet[2546]: I0124 00:56:47.959788 2546 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:56:47.959980 kubelet[2546]: I0124 00:56:47.959954 2546 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:56:47.961536 kubelet[2546]: I0124 00:56:47.961448 2546 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:56:47.961983 kubelet[2546]: I0124 00:56:47.961953 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:56:47.968197 kubelet[2546]: I0124 00:56:47.968152 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:56:47.968197 kubelet[2546]: I0124 00:56:47.968171 2546 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:56:47.968197 kubelet[2546]: I0124 00:56:47.968185 2546 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:56:47.968197 kubelet[2546]: I0124 00:56:47.968191 2546 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:56:47.968414 kubelet[2546]: E0124 00:56:47.968226 2546 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:56:48.004040 kubelet[2546]: I0124 00:56:48.004011 2546 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:56:48.004419 kubelet[2546]: I0124 00:56:48.004401 2546 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:56:48.004523 kubelet[2546]: I0124 00:56:48.004508 2546 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:48.004878 kubelet[2546]: I0124 00:56:48.004848 2546 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:56:48.004993 kubelet[2546]: I0124 00:56:48.004964 2546 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:56:48.005063 kubelet[2546]: I0124 00:56:48.005049 2546 policy_none.go:49] "None policy: Start" Jan 24 00:56:48.005410 kubelet[2546]: I0124 00:56:48.005149 2546 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:56:48.005410 kubelet[2546]: I0124 00:56:48.005172 2546 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:56:48.005410 kubelet[2546]: I0124 00:56:48.005328 2546 state_mem.go:75] "Updated machine memory state" Jan 24 00:56:48.012213 kubelet[2546]: I0124 00:56:48.012171 2546 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:56:48.012401 kubelet[2546]: I0124 00:56:48.012303 2546 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:56:48.012401 kubelet[2546]: I0124 00:56:48.012310 2546 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:56:48.012770 kubelet[2546]: I0124 00:56:48.012750 2546 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:56:48.016263 kubelet[2546]: E0124 00:56:48.013642 2546 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:56:48.069006 kubelet[2546]: I0124 00:56:48.068962 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.069284 kubelet[2546]: I0124 00:56:48.069256 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.069440 kubelet[2546]: I0124 00:56:48.069412 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.077130 kubelet[2546]: E0124 00:56:48.077056 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-32cc93a80b\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.078334 kubelet[2546]: E0124 00:56:48.078272 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.078334 kubelet[2546]: E0124 00:56:48.078313 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.126550 kubelet[2546]: I0124 00:56:48.126499 2546 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.139334 kubelet[2546]: I0124 00:56:48.138873 2546 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.139334 kubelet[2546]: I0124 00:56:48.138972 2546 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.153675 kubelet[2546]: I0124 00:56:48.153627 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dffbbb0eb7a6d47fe54af980d4623ec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-32cc93a80b\" (UID: \"0dffbbb0eb7a6d47fe54af980d4623ec\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.153675 kubelet[2546]: I0124 00:56:48.153676 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.153928 kubelet[2546]: I0124 00:56:48.153707 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.153928 kubelet[2546]: I0124 00:56:48.153770 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 
00:56:48.153928 kubelet[2546]: I0124 00:56:48.153795 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.153928 kubelet[2546]: I0124 00:56:48.153817 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7382ce665c84654c9379bb985be579b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-32cc93a80b\" (UID: \"e7382ce665c84654c9379bb985be579b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.153928 kubelet[2546]: I0124 00:56:48.153839 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e63aa3adde624d08ed8b79c4194ca41f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" (UID: \"e63aa3adde624d08ed8b79c4194ca41f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.154157 kubelet[2546]: I0124 00:56:48.153861 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e63aa3adde624d08ed8b79c4194ca41f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" (UID: \"e63aa3adde624d08ed8b79c4194ca41f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.154157 kubelet[2546]: I0124 00:56:48.153887 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e63aa3adde624d08ed8b79c4194ca41f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" (UID: \"e63aa3adde624d08ed8b79c4194ca41f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.923703 kubelet[2546]: I0124 00:56:48.922628 2546 apiserver.go:52] "Watching apiserver" Jan 24 00:56:48.951005 kubelet[2546]: I0124 00:56:48.950890 2546 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:56:48.988024 kubelet[2546]: I0124 00:56:48.987987 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:48.991167 kubelet[2546]: I0124 00:56:48.990825 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:49.000623 kubelet[2546]: E0124 00:56:48.999695 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-32cc93a80b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:49.000875 kubelet[2546]: E0124 00:56:49.000257 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-32cc93a80b\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" Jan 24 00:56:49.008317 kubelet[2546]: I0124 00:56:49.008272 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32cc93a80b" podStartSLOduration=3.008260198 podStartE2EDuration="3.008260198s" podCreationTimestamp="2026-01-24 00:56:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:49.007562258 +0000 UTC m=+1.157503751" watchObservedRunningTime="2026-01-24 00:56:49.008260198 +0000 UTC m=+1.158201681" Jan 24 00:56:49.018345 kubelet[2546]: I0124 00:56:49.018290 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32cc93a80b" podStartSLOduration=3.018263378 podStartE2EDuration="3.018263378s" podCreationTimestamp="2026-01-24 00:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:49.016394548 +0000 UTC m=+1.166336041" watchObservedRunningTime="2026-01-24 00:56:49.018263378 +0000 UTC m=+1.168204871" Jan 24 00:56:49.033704 kubelet[2546]: I0124 00:56:49.033467 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32cc93a80b" podStartSLOduration=2.033446118 podStartE2EDuration="2.033446118s" podCreationTimestamp="2026-01-24 00:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:49.025335838 +0000 UTC m=+1.175277331" watchObservedRunningTime="2026-01-24 00:56:49.033446118 +0000 UTC m=+1.183387651" Jan 24 00:56:52.661940 kubelet[2546]: I0124 00:56:52.661848 2546 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:56:52.663028 containerd[1500]: time="2026-01-24T00:56:52.662991657Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:56:52.664136 kubelet[2546]: I0124 00:56:52.663222 2546 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:56:53.297875 update_engine[1477]: I20260124 00:56:53.297796 1477 update_attempter.cc:509] Updating boot flags... Jan 24 00:56:53.390198 systemd[1]: Created slice kubepods-besteffort-pod348b867e_9354_45b3_a7bb_b7eb67f9bc75.slice - libcontainer container kubepods-besteffort-pod348b867e_9354_45b3_a7bb_b7eb67f9bc75.slice. 
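
The `m=+1.157…` suffixes on `observedRunningTime` and `watchObservedRunningTime` above are Go's monotonic-clock annotation: `time.Time.String()` appends `m=±<seconds>` whenever the value still carries a monotonic reading, counted from process start, which is why these offsets track seconds since this kubelet instance came up. A small self-contained demonstration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries a monotonic clock reading
	time.Sleep(50 * time.Millisecond)

	// String() appends "m=+<seconds>" while the monotonic reading is
	// present; that is exactly the suffix in the kubelet lines above.
	fmt.Println(time.Now().String())

	// Round(0) is the documented way to strip the monotonic reading,
	// so the suffix disappears from the wall-clock-only value.
	fmt.Println(time.Now().Round(0).String())

	// Elapsed times such as podStartSLOduration are computed from the
	// monotonic readings, immune to wall-clock adjustments.
	fmt.Println(time.Since(start))
}
```
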
Jan 24 00:56:53.393360 kubelet[2546]: I0124 00:56:53.392916 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/348b867e-9354-45b3-a7bb-b7eb67f9bc75-xtables-lock\") pod \"kube-proxy-wj9zm\" (UID: \"348b867e-9354-45b3-a7bb-b7eb67f9bc75\") " pod="kube-system/kube-proxy-wj9zm" Jan 24 00:56:53.393360 kubelet[2546]: I0124 00:56:53.392938 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/348b867e-9354-45b3-a7bb-b7eb67f9bc75-kube-proxy\") pod \"kube-proxy-wj9zm\" (UID: \"348b867e-9354-45b3-a7bb-b7eb67f9bc75\") " pod="kube-system/kube-proxy-wj9zm" Jan 24 00:56:53.393360 kubelet[2546]: I0124 00:56:53.392950 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/348b867e-9354-45b3-a7bb-b7eb67f9bc75-lib-modules\") pod \"kube-proxy-wj9zm\" (UID: \"348b867e-9354-45b3-a7bb-b7eb67f9bc75\") " pod="kube-system/kube-proxy-wj9zm" Jan 24 00:56:53.393360 kubelet[2546]: I0124 00:56:53.392961 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h8lv\" (UniqueName: \"kubernetes.io/projected/348b867e-9354-45b3-a7bb-b7eb67f9bc75-kube-api-access-5h8lv\") pod \"kube-proxy-wj9zm\" (UID: \"348b867e-9354-45b3-a7bb-b7eb67f9bc75\") " pod="kube-system/kube-proxy-wj9zm" Jan 24 00:56:53.404754 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2601) Jan 24 00:56:53.462077 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2601) Jan 24 00:56:53.502402 kubelet[2546]: E0124 00:56:53.502374 2546 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 24 00:56:53.502402 kubelet[2546]: E0124 00:56:53.502399 2546 projected.go:194] Error preparing data for projected volume kube-api-access-5h8lv for pod kube-system/kube-proxy-wj9zm: configmap "kube-root-ca.crt" not found Jan 24 00:56:53.502515 kubelet[2546]: E0124 00:56:53.502441 2546 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/348b867e-9354-45b3-a7bb-b7eb67f9bc75-kube-api-access-5h8lv podName:348b867e-9354-45b3-a7bb-b7eb67f9bc75 nodeName:}" failed. No retries permitted until 2026-01-24 00:56:54.002422427 +0000 UTC m=+6.152363910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5h8lv" (UniqueName: "kubernetes.io/projected/348b867e-9354-45b3-a7bb-b7eb67f9bc75-kube-api-access-5h8lv") pod "kube-proxy-wj9zm" (UID: "348b867e-9354-45b3-a7bb-b7eb67f9bc75") : configmap "kube-root-ca.crt" not found Jan 24 00:56:53.528870 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2601) Jan 24 00:56:53.796548 systemd[1]: Created slice kubepods-besteffort-podec8ba4de_8ded_459b_bd27_14288e528b4d.slice - libcontainer container kubepods-besteffort-podec8ba4de_8ded_459b_bd27_14288e528b4d.slice. 
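
The `configmap "kube-root-ca.crt" not found` mount failure above is retried on a doubling delay; the log shows only the first step (`durationBeforeRetry 500ms`, next attempt permitted at 00:56:54). A sketch of that schedule, where the 500ms base is taken from the log but the doubling factor and the 2m2s cap are assumptions based on the kubelet's exponential-backoff defaults, not something this log records:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	d := 500 * time.Millisecond             // base, from the log line above
	maxWait := 2*time.Minute + 2*time.Second // assumed cap, not from the log
	for i := 0; i < 10; i++ {
		fmt.Printf("retry %d: wait %v\n", i, d)
		d *= 2 // assumed doubling between attempts
		if d > maxWait {
			d = maxWait
		}
	}
}
```

In this boot the retry succeeds almost immediately anyway: the ConfigMap appears once the controller manager publishes it, and the kube-proxy pod sandbox is running by 00:56:54.
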
Jan 24 00:56:53.898474 kubelet[2546]: I0124 00:56:53.898381 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9jx\" (UniqueName: \"kubernetes.io/projected/ec8ba4de-8ded-459b-bd27-14288e528b4d-kube-api-access-cl9jx\") pod \"tigera-operator-7dcd859c48-kb5r9\" (UID: \"ec8ba4de-8ded-459b-bd27-14288e528b4d\") " pod="tigera-operator/tigera-operator-7dcd859c48-kb5r9" Jan 24 00:56:53.898474 kubelet[2546]: I0124 00:56:53.898451 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec8ba4de-8ded-459b-bd27-14288e528b4d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kb5r9\" (UID: \"ec8ba4de-8ded-459b-bd27-14288e528b4d\") " pod="tigera-operator/tigera-operator-7dcd859c48-kb5r9" Jan 24 00:56:54.104171 containerd[1500]: time="2026-01-24T00:56:54.103407167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kb5r9,Uid:ec8ba4de-8ded-459b-bd27-14288e528b4d,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:56:54.161679 containerd[1500]: time="2026-01-24T00:56:54.161075807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:54.161679 containerd[1500]: time="2026-01-24T00:56:54.161190377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:54.161679 containerd[1500]: time="2026-01-24T00:56:54.161215027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:54.161679 containerd[1500]: time="2026-01-24T00:56:54.161356137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:54.204976 systemd[1]: Started cri-containerd-199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5.scope - libcontainer container 199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5. Jan 24 00:56:54.282423 containerd[1500]: time="2026-01-24T00:56:54.282125857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kb5r9,Uid:ec8ba4de-8ded-459b-bd27-14288e528b4d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5\"" Jan 24 00:56:54.286119 containerd[1500]: time="2026-01-24T00:56:54.286062767Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:56:54.299120 containerd[1500]: time="2026-01-24T00:56:54.299061557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj9zm,Uid:348b867e-9354-45b3-a7bb-b7eb67f9bc75,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:54.332432 containerd[1500]: time="2026-01-24T00:56:54.331994957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:54.332432 containerd[1500]: time="2026-01-24T00:56:54.332065607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:54.332432 containerd[1500]: time="2026-01-24T00:56:54.332164947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:54.333478 containerd[1500]: time="2026-01-24T00:56:54.332379617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:54.370133 systemd[1]: Started cri-containerd-3944cda6b8a527a8f69f67e206ac5c581c09b0c55c9aded8d58c4db0d0c41632.scope - libcontainer container 3944cda6b8a527a8f69f67e206ac5c581c09b0c55c9aded8d58c4db0d0c41632. Jan 24 00:56:54.407723 containerd[1500]: time="2026-01-24T00:56:54.407647177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj9zm,Uid:348b867e-9354-45b3-a7bb-b7eb67f9bc75,Namespace:kube-system,Attempt:0,} returns sandbox id \"3944cda6b8a527a8f69f67e206ac5c581c09b0c55c9aded8d58c4db0d0c41632\"" Jan 24 00:56:54.412532 containerd[1500]: time="2026-01-24T00:56:54.412382337Z" level=info msg="CreateContainer within sandbox \"3944cda6b8a527a8f69f67e206ac5c581c09b0c55c9aded8d58c4db0d0c41632\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:56:54.432789 containerd[1500]: time="2026-01-24T00:56:54.432722687Z" level=info msg="CreateContainer within sandbox \"3944cda6b8a527a8f69f67e206ac5c581c09b0c55c9aded8d58c4db0d0c41632\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e78056e90420e6c3ca6e6c223c802da622c51750816e33a125967f91e8c8628\"" Jan 24 00:56:54.434933 containerd[1500]: time="2026-01-24T00:56:54.433427927Z" level=info msg="StartContainer for \"0e78056e90420e6c3ca6e6c223c802da622c51750816e33a125967f91e8c8628\"" Jan 24 00:56:54.465857 systemd[1]: Started cri-containerd-0e78056e90420e6c3ca6e6c223c802da622c51750816e33a125967f91e8c8628.scope - libcontainer container 0e78056e90420e6c3ca6e6c223c802da622c51750816e33a125967f91e8c8628. Jan 24 00:56:54.503638 containerd[1500]: time="2026-01-24T00:56:54.503594357Z" level=info msg="StartContainer for \"0e78056e90420e6c3ca6e6c223c802da622c51750816e33a125967f91e8c8628\" returns successfully" Jan 24 00:56:55.039555 kubelet[2546]: I0124 00:56:55.039043 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj9zm" podStartSLOduration=2.039017447 podStartE2EDuration="2.039017447s" podCreationTimestamp="2026-01-24 00:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:55.019081367 +0000 UTC m=+7.169022890" watchObservedRunningTime="2026-01-24 00:56:55.039017447 +0000 UTC m=+7.188958960" Jan 24 00:56:56.074111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014741514.mount: Deactivated successfully. 
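
The containerd lines above record the CRI calls the kubelet makes to start a pod, in order: `RunPodSandbox`, then `CreateContainer` inside the returned sandbox, then `StartContainer`, all against one gRPC `RuntimeService`. A minimal sketch that dials the same service and issues the cheapest call on it; the socket path is an assumption (containerd's default), since this log never prints the endpoint the kubelet was configured with:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket; verify on the node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The pod-start sequence in the log is:
	//   RunPodSandbox -> CreateContainer -> StartContainer
	// Version is the lightest probe of the same RuntimeService.
	v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. containerd v1.7.21
}
```
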
Jan 24 00:56:56.651372 containerd[1500]: time="2026-01-24T00:56:56.651324247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:56.652512 containerd[1500]: time="2026-01-24T00:56:56.652363027Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:56:56.653826 containerd[1500]: time="2026-01-24T00:56:56.653576937Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:56.655459 containerd[1500]: time="2026-01-24T00:56:56.655440997Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:56.655886 containerd[1500]: time="2026-01-24T00:56:56.655862687Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.36951107s" Jan 24 00:56:56.655925 containerd[1500]: time="2026-01-24T00:56:56.655887457Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:56:56.657879 containerd[1500]: time="2026-01-24T00:56:56.657845837Z" level=info msg="CreateContainer within sandbox \"199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:56:56.678658 containerd[1500]: time="2026-01-24T00:56:56.678606817Z" level=info msg="CreateContainer within sandbox \"199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302\"" Jan 24 00:56:56.679078 containerd[1500]: time="2026-01-24T00:56:56.679049317Z" level=info msg="StartContainer for \"33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302\"" Jan 24 00:56:56.704896 systemd[1]: Started cri-containerd-33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302.scope - libcontainer container 33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302. 
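
Back-of-envelope check on the tigera-operator pull above, using the two figures containerd reports (`bytes read=25061691` over `2.36951107s`):

```go
package main

import "fmt"

func main() {
	// Numbers copied from the pull log above.
	const bytesRead = 25061691 // "active requests=0, bytes read=..."
	const seconds = 2.36951107 // "... in 2.36951107s"

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("pulled %.1f MiB at %.1f MiB/s\n", mib, mib/seconds)
}
```

That works out to roughly 23.9 MiB at about 10 MiB/s from quay.io. Note the two sizes in the log differ slightly (`25061691` bytes transferred vs. the image's recorded size `25057686`), and the image id (`sha256:f2c1be…`) is the config digest while the repo digest (`sha256:1b629a…`) identifies the manifest.
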
Jan 24 00:56:56.753489 containerd[1500]: time="2026-01-24T00:56:56.753439597Z" level=info msg="StartContainer for \"33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302\" returns successfully" Jan 24 00:56:57.022340 kubelet[2546]: I0124 00:56:57.022082 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kb5r9" podStartSLOduration=1.650544067 podStartE2EDuration="4.022041417s" podCreationTimestamp="2026-01-24 00:56:53 +0000 UTC" firstStartedPulling="2026-01-24 00:56:54.284960967 +0000 UTC m=+6.434902490" lastFinishedPulling="2026-01-24 00:56:56.656458347 +0000 UTC m=+8.806399840" observedRunningTime="2026-01-24 00:56:57.021803937 +0000 UTC m=+9.171745470" watchObservedRunningTime="2026-01-24 00:56:57.022041417 +0000 UTC m=+9.171982940" Jan 24 00:56:57.073043 systemd[1]: run-containerd-runc-k8s.io-33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302-runc.Y1ALkl.mount: Deactivated successfully. Jan 24 00:57:02.056678 sudo[1701]: pam_unix(sudo:session): session closed for user root Jan 24 00:57:02.179987 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:02.182614 systemd[1]: sshd@6-89.167.6.198:22-20.161.92.111:52512.service: Deactivated successfully. Jan 24 00:57:02.184490 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:57:02.184694 systemd[1]: session-7.scope: Consumed 6.153s CPU time, 153.8M memory peak, 0B memory swap peak. Jan 24 00:57:02.186207 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:57:02.187241 systemd-logind[1476]: Removed session 7. Jan 24 00:57:06.265355 kubelet[2546]: I0124 00:57:06.265065 2546 status_manager.go:890] "Failed to get status for pod" podUID="936e3a2f-fb5c-4249-9060-b6980dd45cdc" pod="calico-system/calico-typha-5cf869594f-mr5nl" err="pods \"calico-typha-5cf869594f-mr5nl\" is forbidden: User \"system:node:ci-4081-3-6-n-32cc93a80b\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object" Jan 24 00:57:06.265355 kubelet[2546]: W0124 00:57:06.265112 2546 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-n-32cc93a80b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object Jan 24 00:57:06.265355 kubelet[2546]: E0124 00:57:06.265146 2546 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-n-32cc93a80b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object" logger="UnhandledError" Jan 24 00:57:06.267383 kubelet[2546]: W0124 00:57:06.266129 2546 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-6-n-32cc93a80b" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object Jan 24 00:57:06.267383 kubelet[2546]: E0124 00:57:06.266149 2546 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-6-n-32cc93a80b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object" logger="UnhandledError" Jan 24 00:57:06.267383 kubelet[2546]: W0124 00:57:06.266477 2546 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-3-6-n-32cc93a80b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object Jan 24 00:57:06.267383 kubelet[2546]: E0124 00:57:06.266488 2546 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-6-n-32cc93a80b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object" logger="UnhandledError" Jan 24 00:57:06.270820 systemd[1]: Created slice kubepods-besteffort-pod936e3a2f_fb5c_4249_9060_b6980dd45cdc.slice - libcontainer container kubepods-besteffort-pod936e3a2f_fb5c_4249_9060_b6980dd45cdc.slice. Jan 24 00:57:06.273995 kubelet[2546]: I0124 00:57:06.273911 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/936e3a2f-fb5c-4249-9060-b6980dd45cdc-typha-certs\") pod \"calico-typha-5cf869594f-mr5nl\" (UID: \"936e3a2f-fb5c-4249-9060-b6980dd45cdc\") " pod="calico-system/calico-typha-5cf869594f-mr5nl" Jan 24 00:57:06.273995 kubelet[2546]: I0124 00:57:06.273935 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdkfh\" (UniqueName: \"kubernetes.io/projected/936e3a2f-fb5c-4249-9060-b6980dd45cdc-kube-api-access-mdkfh\") pod \"calico-typha-5cf869594f-mr5nl\" (UID: \"936e3a2f-fb5c-4249-9060-b6980dd45cdc\") " pod="calico-system/calico-typha-5cf869594f-mr5nl" Jan 24 00:57:06.273995 kubelet[2546]: I0124 00:57:06.273947 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936e3a2f-fb5c-4249-9060-b6980dd45cdc-tigera-ca-bundle\") pod \"calico-typha-5cf869594f-mr5nl\" (UID: \"936e3a2f-fb5c-4249-9060-b6980dd45cdc\") " pod="calico-system/calico-typha-5cf869594f-mr5nl" Jan 24 00:57:06.483611 systemd[1]: Created slice kubepods-besteffort-pod1e096f52_aca0_485f_b877_f8a68fcdf025.slice - libcontainer container kubepods-besteffort-pod1e096f52_aca0_485f_b877_f8a68fcdf025.slice. 
Jan 24 00:57:06.578360 kubelet[2546]: I0124 00:57:06.578160 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-cni-net-dir\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.578360 kubelet[2546]: I0124 00:57:06.578213 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e096f52-aca0-485f-b877-f8a68fcdf025-tigera-ca-bundle\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.578360 kubelet[2546]: I0124 00:57:06.578245 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-var-run-calico\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.578360 kubelet[2546]: I0124 00:57:06.578273 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-cni-bin-dir\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.578360 kubelet[2546]: I0124 00:57:06.578296 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-lib-modules\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.578718 kubelet[2546]: I0124 00:57:06.578320 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-xtables-lock\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.579256 kubelet[2546]: I0124 00:57:06.579171 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-flexvol-driver-host\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.579256 kubelet[2546]: I0124 00:57:06.579228 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-cni-log-dir\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.579256 kubelet[2546]: I0124 00:57:06.579254 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-var-lib-calico\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.579590 kubelet[2546]: I0124 00:57:06.579278 2546 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4np5\" (UniqueName: \"kubernetes.io/projected/1e096f52-aca0-485f-b877-f8a68fcdf025-kube-api-access-l4np5\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.579590 kubelet[2546]: I0124 00:57:06.579305 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e096f52-aca0-485f-b877-f8a68fcdf025-node-certs\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.579590 kubelet[2546]: I0124 00:57:06.579330 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e096f52-aca0-485f-b877-f8a68fcdf025-policysync\") pod \"calico-node-pd5qk\" (UID: \"1e096f52-aca0-485f-b877-f8a68fcdf025\") " pod="calico-system/calico-node-pd5qk" Jan 24 00:57:06.654981 kubelet[2546]: E0124 00:57:06.654874 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:06.696021 kubelet[2546]: E0124 00:57:06.695940 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.696021 kubelet[2546]: W0124 00:57:06.696006 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.696223 kubelet[2546]: E0124 00:57:06.696086 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.784126 kubelet[2546]: E0124 00:57:06.783886 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.784126 kubelet[2546]: W0124 00:57:06.783914 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.784126 kubelet[2546]: E0124 00:57:06.783940 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.784126 kubelet[2546]: I0124 00:57:06.783981 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdst7\" (UniqueName: \"kubernetes.io/projected/43bd5f1f-4a0c-4b9f-b986-69bf7780bcee-kube-api-access-zdst7\") pod \"csi-node-driver-ftl5s\" (UID: \"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\") " pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:06.784689 kubelet[2546]: E0124 00:57:06.784669 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.785002 kubelet[2546]: W0124 00:57:06.784792 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.785002 kubelet[2546]: E0124 00:57:06.784854 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.785002 kubelet[2546]: I0124 00:57:06.784875 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/43bd5f1f-4a0c-4b9f-b986-69bf7780bcee-varrun\") pod \"csi-node-driver-ftl5s\" (UID: \"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\") " pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:06.785441 kubelet[2546]: E0124 00:57:06.785358 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.785866 kubelet[2546]: W0124 00:57:06.785547 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.785866 kubelet[2546]: E0124 00:57:06.785597 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.785866 kubelet[2546]: I0124 00:57:06.785637 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/43bd5f1f-4a0c-4b9f-b986-69bf7780bcee-registration-dir\") pod \"csi-node-driver-ftl5s\" (UID: \"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\") " pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:06.786710 kubelet[2546]: E0124 00:57:06.786673 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.786710 kubelet[2546]: W0124 00:57:06.786707 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.786834 kubelet[2546]: E0124 00:57:06.786774 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.787231 kubelet[2546]: E0124 00:57:06.787203 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.787231 kubelet[2546]: W0124 00:57:06.787229 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.787826 kubelet[2546]: E0124 00:57:06.787466 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.787826 kubelet[2546]: E0124 00:57:06.787786 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.788374 kubelet[2546]: W0124 00:57:06.788320 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.788513 kubelet[2546]: E0124 00:57:06.788476 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.788869 kubelet[2546]: E0124 00:57:06.788838 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.788869 kubelet[2546]: W0124 00:57:06.788863 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.789283 kubelet[2546]: E0124 00:57:06.789247 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.789333 kubelet[2546]: I0124 00:57:06.789295 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43bd5f1f-4a0c-4b9f-b986-69bf7780bcee-kubelet-dir\") pod \"csi-node-driver-ftl5s\" (UID: \"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\") " pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:06.789854 kubelet[2546]: E0124 00:57:06.789794 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.789854 kubelet[2546]: W0124 00:57:06.789818 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.790370 kubelet[2546]: E0124 00:57:06.790051 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.790702 kubelet[2546]: E0124 00:57:06.790400 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.790702 kubelet[2546]: W0124 00:57:06.790440 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.790702 kubelet[2546]: E0124 00:57:06.790459 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.791621 kubelet[2546]: E0124 00:57:06.791551 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.791621 kubelet[2546]: W0124 00:57:06.791582 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.791621 kubelet[2546]: E0124 00:57:06.791618 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.792499 kubelet[2546]: E0124 00:57:06.792466 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.792499 kubelet[2546]: W0124 00:57:06.792494 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.792642 kubelet[2546]: E0124 00:57:06.792513 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.793452 kubelet[2546]: E0124 00:57:06.793404 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.793452 kubelet[2546]: W0124 00:57:06.793451 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.793555 kubelet[2546]: E0124 00:57:06.793468 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.794176 kubelet[2546]: E0124 00:57:06.794146 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.794247 kubelet[2546]: W0124 00:57:06.794171 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.794247 kubelet[2546]: E0124 00:57:06.794227 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.794303 kubelet[2546]: I0124 00:57:06.794262 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/43bd5f1f-4a0c-4b9f-b986-69bf7780bcee-socket-dir\") pod \"csi-node-driver-ftl5s\" (UID: \"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\") " pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:06.795052 kubelet[2546]: E0124 00:57:06.795017 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.795052 kubelet[2546]: W0124 00:57:06.795045 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.795166 kubelet[2546]: E0124 00:57:06.795101 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.796833 kubelet[2546]: E0124 00:57:06.795768 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.796833 kubelet[2546]: W0124 00:57:06.795794 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.796833 kubelet[2546]: E0124 00:57:06.795811 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.895848 kubelet[2546]: E0124 00:57:06.894976 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.895848 kubelet[2546]: W0124 00:57:06.895009 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.895848 kubelet[2546]: E0124 00:57:06.895037 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.895848 kubelet[2546]: E0124 00:57:06.895573 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.895848 kubelet[2546]: W0124 00:57:06.895597 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.895848 kubelet[2546]: E0124 00:57:06.895631 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.896230 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.898034 kubelet[2546]: W0124 00:57:06.896247 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.896280 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.896791 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.898034 kubelet[2546]: W0124 00:57:06.896808 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.896826 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.897227 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.898034 kubelet[2546]: W0124 00:57:06.897244 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.897262 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.898034 kubelet[2546]: E0124 00:57:06.897755 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.898469 kubelet[2546]: W0124 00:57:06.897780 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.898469 kubelet[2546]: E0124 00:57:06.897837 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.899273 kubelet[2546]: E0124 00:57:06.898919 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.899273 kubelet[2546]: W0124 00:57:06.898997 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.899273 kubelet[2546]: E0124 00:57:06.899031 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.900396 kubelet[2546]: E0124 00:57:06.900357 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.900396 kubelet[2546]: W0124 00:57:06.900385 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.900652 kubelet[2546]: E0124 00:57:06.900589 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.901214 kubelet[2546]: E0124 00:57:06.901051 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.901214 kubelet[2546]: W0124 00:57:06.901070 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.901214 kubelet[2546]: E0124 00:57:06.901123 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.901535 kubelet[2546]: E0124 00:57:06.901497 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.901535 kubelet[2546]: W0124 00:57:06.901517 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.901727 kubelet[2546]: E0124 00:57:06.901623 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.901963 kubelet[2546]: E0124 00:57:06.901928 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.901963 kubelet[2546]: W0124 00:57:06.901950 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.902059 kubelet[2546]: E0124 00:57:06.901972 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.902384 kubelet[2546]: E0124 00:57:06.902350 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.902384 kubelet[2546]: W0124 00:57:06.902375 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.902826 kubelet[2546]: E0124 00:57:06.902616 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.902902 kubelet[2546]: E0124 00:57:06.902891 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.902939 kubelet[2546]: W0124 00:57:06.902906 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.903024 kubelet[2546]: E0124 00:57:06.903003 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.903383 kubelet[2546]: E0124 00:57:06.903355 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.903383 kubelet[2546]: W0124 00:57:06.903378 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.903618 kubelet[2546]: E0124 00:57:06.903513 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.903932 kubelet[2546]: E0124 00:57:06.903899 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.903932 kubelet[2546]: W0124 00:57:06.903923 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.904294 kubelet[2546]: E0124 00:57:06.904100 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.904383 kubelet[2546]: E0124 00:57:06.904353 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.904383 kubelet[2546]: W0124 00:57:06.904366 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.904576 kubelet[2546]: E0124 00:57:06.904533 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.904891 kubelet[2546]: E0124 00:57:06.904872 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.905065 kubelet[2546]: W0124 00:57:06.904983 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.905065 kubelet[2546]: E0124 00:57:06.905042 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.905707 kubelet[2546]: E0124 00:57:06.905571 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.905707 kubelet[2546]: W0124 00:57:06.905588 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.905707 kubelet[2546]: E0124 00:57:06.905633 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.906198 kubelet[2546]: E0124 00:57:06.906047 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.906198 kubelet[2546]: W0124 00:57:06.906064 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.906198 kubelet[2546]: E0124 00:57:06.906111 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.906489 kubelet[2546]: E0124 00:57:06.906470 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.906689 kubelet[2546]: W0124 00:57:06.906581 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.906689 kubelet[2546]: E0124 00:57:06.906645 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.907329 kubelet[2546]: E0124 00:57:06.907290 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.907329 kubelet[2546]: W0124 00:57:06.907314 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.907712 kubelet[2546]: E0124 00:57:06.907472 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.907901 kubelet[2546]: E0124 00:57:06.907841 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.907901 kubelet[2546]: W0124 00:57:06.907883 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.908082 kubelet[2546]: E0124 00:57:06.908005 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:06.908345 kubelet[2546]: E0124 00:57:06.908303 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.908345 kubelet[2546]: W0124 00:57:06.908326 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.908804 kubelet[2546]: E0124 00:57:06.908543 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.908954 kubelet[2546]: E0124 00:57:06.908910 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.908954 kubelet[2546]: W0124 00:57:06.908933 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.909861 kubelet[2546]: E0124 00:57:06.909124 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:06.909861 kubelet[2546]: E0124 00:57:06.909414 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:06.909861 kubelet[2546]: W0124 00:57:06.909446 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:06.909861 kubelet[2546]: E0124 00:57:06.909463 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.198299 kubelet[2546]: E0124 00:57:07.198152 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.198299 kubelet[2546]: W0124 00:57:07.198191 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.198299 kubelet[2546]: E0124 00:57:07.198225 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.375994 kubelet[2546]: E0124 00:57:07.375921 2546 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:57:07.376937 kubelet[2546]: E0124 00:57:07.376023 2546 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/936e3a2f-fb5c-4249-9060-b6980dd45cdc-tigera-ca-bundle podName:936e3a2f-fb5c-4249-9060-b6980dd45cdc nodeName:}" failed. No retries permitted until 2026-01-24 00:57:07.87600286 +0000 UTC m=+20.025944343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/936e3a2f-fb5c-4249-9060-b6980dd45cdc-tigera-ca-bundle") pod "calico-typha-5cf869594f-mr5nl" (UID: "936e3a2f-fb5c-4249-9060-b6980dd45cdc") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:57:07.382943 kubelet[2546]: E0124 00:57:07.382908 2546 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:57:07.382943 kubelet[2546]: E0124 00:57:07.382934 2546 projected.go:194] Error preparing data for projected volume kube-api-access-mdkfh for pod calico-system/calico-typha-5cf869594f-mr5nl: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:57:07.383109 kubelet[2546]: E0124 00:57:07.382984 2546 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/936e3a2f-fb5c-4249-9060-b6980dd45cdc-kube-api-access-mdkfh podName:936e3a2f-fb5c-4249-9060-b6980dd45cdc nodeName:}" failed. No retries permitted until 2026-01-24 00:57:07.882972392 +0000 UTC m=+20.032913885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mdkfh" (UniqueName: "kubernetes.io/projected/936e3a2f-fb5c-4249-9060-b6980dd45cdc-kube-api-access-mdkfh") pod "calico-typha-5cf869594f-mr5nl" (UID: "936e3a2f-fb5c-4249-9060-b6980dd45cdc") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:57:07.406850 kubelet[2546]: E0124 00:57:07.406798 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.406850 kubelet[2546]: W0124 00:57:07.406832 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.407041 kubelet[2546]: E0124 00:57:07.406858 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.407370 kubelet[2546]: E0124 00:57:07.407315 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.407370 kubelet[2546]: W0124 00:57:07.407340 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.407370 kubelet[2546]: E0124 00:57:07.407360 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.428280 kubelet[2546]: E0124 00:57:07.427202 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.428280 kubelet[2546]: W0124 00:57:07.427363 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.428280 kubelet[2546]: E0124 00:57:07.427525 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:07.431826 kubelet[2546]: E0124 00:57:07.431495 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.431826 kubelet[2546]: W0124 00:57:07.431823 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.438633 kubelet[2546]: E0124 00:57:07.431844 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.455225 kubelet[2546]: E0124 00:57:07.455108 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.455225 kubelet[2546]: W0124 00:57:07.455139 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.455225 kubelet[2546]: E0124 00:57:07.455160 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.508397 kubelet[2546]: E0124 00:57:07.508343 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.508397 kubelet[2546]: W0124 00:57:07.508371 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.508397 kubelet[2546]: E0124 00:57:07.508395 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.508934 kubelet[2546]: E0124 00:57:07.508889 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.508934 kubelet[2546]: W0124 00:57:07.508920 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.508934 kubelet[2546]: E0124 00:57:07.508948 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.610634 kubelet[2546]: E0124 00:57:07.610382 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.610634 kubelet[2546]: W0124 00:57:07.610402 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.610634 kubelet[2546]: E0124 00:57:07.610423 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:07.610910 kubelet[2546]: E0124 00:57:07.610839 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.610910 kubelet[2546]: W0124 00:57:07.610871 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.610960 kubelet[2546]: E0124 00:57:07.610907 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.690631 containerd[1500]: time="2026-01-24T00:57:07.690564070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pd5qk,Uid:1e096f52-aca0-485f-b877-f8a68fcdf025,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:07.716391 kubelet[2546]: E0124 00:57:07.712823 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.716391 kubelet[2546]: W0124 00:57:07.712993 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.716391 kubelet[2546]: E0124 00:57:07.713020 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.716391 kubelet[2546]: E0124 00:57:07.714873 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.716391 kubelet[2546]: W0124 00:57:07.714896 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.716391 kubelet[2546]: E0124 00:57:07.714919 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.736166 containerd[1500]: time="2026-01-24T00:57:07.734482781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:07.736166 containerd[1500]: time="2026-01-24T00:57:07.734670885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:07.736166 containerd[1500]: time="2026-01-24T00:57:07.734927111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:07.736755 containerd[1500]: time="2026-01-24T00:57:07.736613110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:07.784441 systemd[1]: run-containerd-runc-k8s.io-c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0-runc.SdQGj1.mount: Deactivated successfully. Jan 24 00:57:07.798977 systemd[1]: Started cri-containerd-c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0.scope - libcontainer container c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0. 
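[Annotation] The flood of kubelet errors above is one failure reported three ways. The dynamic FlexVolume prober finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds executable inside it does not exist yet, so the driver call for "init" produces no output and decoding that empty string fails. The prober re-runs on every event in the plugin directory, hence the repetition. A minimal Go sketch of the decode step, showing why empty driver output surfaces as exactly the logged message (the struct fields are an illustrative stand-in for the kubelet's driver status type, not copied from its source):

package main

import (
	"encoding/json"
	"fmt"
)

// A FlexVolume driver must print a JSON status object in response to
// "init". Here the driver binary is missing, so its output is empty,
// and unmarshalling "" fails the same way the kubelet logs above.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}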
Jan 24 00:57:07.816724 kubelet[2546]: E0124 00:57:07.816601 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.816724 kubelet[2546]: W0124 00:57:07.816628 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.816724 kubelet[2546]: E0124 00:57:07.816679 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.817786 kubelet[2546]: E0124 00:57:07.817252 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.817786 kubelet[2546]: W0124 00:57:07.817270 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.817786 kubelet[2546]: E0124 00:57:07.817287 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.852121 containerd[1500]: time="2026-01-24T00:57:07.851980992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pd5qk,Uid:1e096f52-aca0-485f-b877-f8a68fcdf025,Namespace:calico-system,Attempt:0,} returns sandbox id \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\"" Jan 24 00:57:07.857150 containerd[1500]: time="2026-01-24T00:57:07.856995298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:57:07.918594 kubelet[2546]: E0124 00:57:07.918404 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.918594 kubelet[2546]: W0124 00:57:07.918432 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.918594 kubelet[2546]: E0124 00:57:07.918483 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.919202 kubelet[2546]: E0124 00:57:07.919149 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.919202 kubelet[2546]: W0124 00:57:07.919170 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.919781 kubelet[2546]: E0124 00:57:07.919519 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:07.920923 kubelet[2546]: E0124 00:57:07.920890 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.920923 kubelet[2546]: W0124 00:57:07.920918 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.921927 kubelet[2546]: E0124 00:57:07.921788 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.922431 kubelet[2546]: E0124 00:57:07.922404 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.922613 kubelet[2546]: W0124 00:57:07.922479 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.922613 kubelet[2546]: E0124 00:57:07.922500 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.923560 kubelet[2546]: E0124 00:57:07.923503 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.923560 kubelet[2546]: W0124 00:57:07.923529 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.923888 kubelet[2546]: E0124 00:57:07.923545 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.925999 kubelet[2546]: E0124 00:57:07.925799 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.925999 kubelet[2546]: W0124 00:57:07.925822 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.925999 kubelet[2546]: E0124 00:57:07.925873 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.927935 kubelet[2546]: E0124 00:57:07.927857 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.927935 kubelet[2546]: W0124 00:57:07.927878 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.927935 kubelet[2546]: E0124 00:57:07.927896 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:07.928625 kubelet[2546]: E0124 00:57:07.928563 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.928764 kubelet[2546]: W0124 00:57:07.928583 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.928826 kubelet[2546]: E0124 00:57:07.928728 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.930273 kubelet[2546]: E0124 00:57:07.930210 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.930334 kubelet[2546]: W0124 00:57:07.930284 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.930334 kubelet[2546]: E0124 00:57:07.930311 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.931227 kubelet[2546]: E0124 00:57:07.931193 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.931279 kubelet[2546]: W0124 00:57:07.931226 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.931279 kubelet[2546]: E0124 00:57:07.931249 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.940481 kubelet[2546]: E0124 00:57:07.936053 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.940481 kubelet[2546]: W0124 00:57:07.936087 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.940481 kubelet[2546]: E0124 00:57:07.936165 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:57:07.944873 kubelet[2546]: E0124 00:57:07.944838 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:57:07.944873 kubelet[2546]: W0124 00:57:07.944867 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:57:07.945121 kubelet[2546]: E0124 00:57:07.944892 2546 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:57:07.970099 kubelet[2546]: E0124 00:57:07.969952 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:08.076034 containerd[1500]: time="2026-01-24T00:57:08.075971319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cf869594f-mr5nl,Uid:936e3a2f-fb5c-4249-9060-b6980dd45cdc,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:08.135981 containerd[1500]: time="2026-01-24T00:57:08.135324762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:08.135981 containerd[1500]: time="2026-01-24T00:57:08.135444394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:08.135981 containerd[1500]: time="2026-01-24T00:57:08.135491475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:08.135981 containerd[1500]: time="2026-01-24T00:57:08.135669179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:08.184988 systemd[1]: Started cri-containerd-650f8b561740de88287ec33a63345d50c90a6d7bd86d5d7f4d41bb5ca10c8df6.scope - libcontainer container 650f8b561740de88287ec33a63345d50c90a6d7bd86d5d7f4d41bb5ca10c8df6. Jan 24 00:57:08.250896 containerd[1500]: time="2026-01-24T00:57:08.250286386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cf869594f-mr5nl,Uid:936e3a2f-fb5c-4249-9060-b6980dd45cdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"650f8b561740de88287ec33a63345d50c90a6d7bd86d5d7f4d41bb5ca10c8df6\"" Jan 24 00:57:09.729713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996039527.mount: Deactivated successfully. 
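[Annotation] The MountVolume.SetUp failures for tigera-ca-bundle and kube-api-access-mdkfh further up ("failed to sync configmap cache: timed out waiting for the condition") are transient: the kubelet's configmap informer had not caught up when the typha pod was admitted. Each failed mount operation is parked and retried after a growing wait, which is what "No retries permitted until ... (durationBeforeRetry 500ms)" records. A sketch of that pacing; the 500ms starting value comes from the log, while the doubling and the cap of roughly two minutes are assumptions about the kubelet's exponential backoff, not values read from this system:

package main

import (
	"fmt"
	"time"
)

func main() {
	wait := 500 * time.Millisecond             // matches "durationBeforeRetry 500ms"
	const maxWait = 2*time.Minute + 2*time.Second // assumed cap
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}

Once the informer syncs, the retry succeeds and the pod proceeds, which is consistent with the typha sandbox being created successfully just above.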
Jan 24 00:57:09.870467 containerd[1500]: time="2026-01-24T00:57:09.870393171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:09.871726 containerd[1500]: time="2026-01-24T00:57:09.871423792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 24 00:57:09.872901 containerd[1500]: time="2026-01-24T00:57:09.872812520Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:09.876799 containerd[1500]: time="2026-01-24T00:57:09.876390064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:09.877788 containerd[1500]: time="2026-01-24T00:57:09.877056637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.020009788s" Jan 24 00:57:09.877788 containerd[1500]: time="2026-01-24T00:57:09.877097468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:57:09.880764 containerd[1500]: time="2026-01-24T00:57:09.880697572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:57:09.883323 containerd[1500]: time="2026-01-24T00:57:09.883120521Z" level=info msg="CreateContainer within sandbox \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:57:09.910069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021864004.mount: Deactivated successfully. Jan 24 00:57:09.911842 containerd[1500]: time="2026-01-24T00:57:09.911787627Z" level=info msg="CreateContainer within sandbox \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485\"" Jan 24 00:57:09.914621 containerd[1500]: time="2026-01-24T00:57:09.913437120Z" level=info msg="StartContainer for \"6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485\"" Jan 24 00:57:09.971646 kubelet[2546]: E0124 00:57:09.970807 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:09.982203 systemd[1]: Started cri-containerd-6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485.scope - libcontainer container 6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485. 
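[Annotation] The flexvol-driver container started here comes from the pod2daemon-flexvol image pulled just above. Its job is to copy Calico's uds FlexVolume driver into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the very binary whose absence caused the probe errors earlier, which is why that error stream stops after this point in the log. Once the driver is in place, the kubelet's init probe receives a JSON reply instead of empty output; a sketch of what a well-formed reply looks like (the capability flag shown is an assumption about a typical driver, not read from this one):

package main

import (
	"encoding/json"
	"fmt"
)

// Shape of a healthy FlexVolume "init" response: a status string plus
// an optional capabilities map telling the kubelet whether the driver
// supports attach/detach.
func main() {
	reply := map[string]interface{}{
		"status":       "Success",
		"capabilities": map[string]bool{"attach": false},
	}
	out, err := json.Marshal(reply)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"capabilities":{"attach":false},"status":"Success"}
}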
Jan 24 00:57:10.034706 containerd[1500]: time="2026-01-24T00:57:10.034657603Z" level=info msg="StartContainer for \"6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485\" returns successfully" Jan 24 00:57:10.052631 systemd[1]: cri-containerd-6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485.scope: Deactivated successfully. Jan 24 00:57:10.174155 containerd[1500]: time="2026-01-24T00:57:10.174021812Z" level=info msg="shim disconnected" id=6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485 namespace=k8s.io Jan 24 00:57:10.174155 containerd[1500]: time="2026-01-24T00:57:10.174153574Z" level=warning msg="cleaning up after shim disconnected" id=6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485 namespace=k8s.io Jan 24 00:57:10.174428 containerd[1500]: time="2026-01-24T00:57:10.174175275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:57:10.730320 systemd[1]: run-containerd-runc-k8s.io-6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485-runc.WWHqRe.mount: Deactivated successfully. Jan 24 00:57:10.730505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b04f36a6b57b1ed5e073f3e9e8d5b8eb343fac524bbad6359caaa5e6e442485-rootfs.mount: Deactivated successfully. Jan 24 00:57:11.971097 kubelet[2546]: E0124 00:57:11.970991 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:13.146245 containerd[1500]: time="2026-01-24T00:57:13.146184462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:13.147286 containerd[1500]: time="2026-01-24T00:57:13.147142377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 24 00:57:13.148120 containerd[1500]: time="2026-01-24T00:57:13.148098962Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:13.150102 containerd[1500]: time="2026-01-24T00:57:13.149621136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:13.150102 containerd[1500]: time="2026-01-24T00:57:13.150015722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.26928059s" Jan 24 00:57:13.150102 containerd[1500]: time="2026-01-24T00:57:13.150034973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:57:13.151708 containerd[1500]: time="2026-01-24T00:57:13.151593847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:57:13.163802 containerd[1500]: time="2026-01-24T00:57:13.163129959Z" level=info msg="CreateContainer within 
sandbox \"650f8b561740de88287ec33a63345d50c90a6d7bd86d5d7f4d41bb5ca10c8df6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:57:13.181248 containerd[1500]: time="2026-01-24T00:57:13.181205014Z" level=info msg="CreateContainer within sandbox \"650f8b561740de88287ec33a63345d50c90a6d7bd86d5d7f4d41bb5ca10c8df6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4833ea7b132e0d5db67f8d95734e6c5f795b44f68f70d5d6a67f1b8654590395\"" Jan 24 00:57:13.181696 containerd[1500]: time="2026-01-24T00:57:13.181682712Z" level=info msg="StartContainer for \"4833ea7b132e0d5db67f8d95734e6c5f795b44f68f70d5d6a67f1b8654590395\"" Jan 24 00:57:13.208892 systemd[1]: Started cri-containerd-4833ea7b132e0d5db67f8d95734e6c5f795b44f68f70d5d6a67f1b8654590395.scope - libcontainer container 4833ea7b132e0d5db67f8d95734e6c5f795b44f68f70d5d6a67f1b8654590395. Jan 24 00:57:13.242330 containerd[1500]: time="2026-01-24T00:57:13.242295728Z" level=info msg="StartContainer for \"4833ea7b132e0d5db67f8d95734e6c5f795b44f68f70d5d6a67f1b8654590395\" returns successfully" Jan 24 00:57:13.972119 kubelet[2546]: E0124 00:57:13.969422 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:14.071956 kubelet[2546]: I0124 00:57:14.071402 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cf869594f-mr5nl" podStartSLOduration=3.17429668 podStartE2EDuration="8.071376242s" podCreationTimestamp="2026-01-24 00:57:06 +0000 UTC" firstStartedPulling="2026-01-24 00:57:08.253901725 +0000 UTC m=+20.403843238" lastFinishedPulling="2026-01-24 00:57:13.150981317 +0000 UTC m=+25.300922800" observedRunningTime="2026-01-24 00:57:14.070962365 +0000 UTC m=+26.220903888" watchObservedRunningTime="2026-01-24 00:57:14.071376242 +0000 UTC m=+26.221317775" Jan 24 00:57:15.059217 kubelet[2546]: I0124 00:57:15.059155 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:57:15.971656 kubelet[2546]: E0124 00:57:15.970728 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:17.398397 containerd[1500]: time="2026-01-24T00:57:17.398355025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:17.399540 containerd[1500]: time="2026-01-24T00:57:17.399454749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:57:17.400670 containerd[1500]: time="2026-01-24T00:57:17.400503961Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:17.402414 containerd[1500]: time="2026-01-24T00:57:17.402378204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 24 00:57:17.402909 containerd[1500]: time="2026-01-24T00:57:17.402790169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.251176632s" Jan 24 00:57:17.402909 containerd[1500]: time="2026-01-24T00:57:17.402815010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:57:17.405896 containerd[1500]: time="2026-01-24T00:57:17.405863747Z" level=info msg="CreateContainer within sandbox \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:57:17.420594 containerd[1500]: time="2026-01-24T00:57:17.420554706Z" level=info msg="CreateContainer within sandbox \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3\"" Jan 24 00:57:17.421089 containerd[1500]: time="2026-01-24T00:57:17.421054132Z" level=info msg="StartContainer for \"d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3\"" Jan 24 00:57:17.450875 systemd[1]: Started cri-containerd-d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3.scope - libcontainer container d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3. Jan 24 00:57:17.481414 containerd[1500]: time="2026-01-24T00:57:17.481284596Z" level=info msg="StartContainer for \"d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3\" returns successfully" Jan 24 00:57:17.901782 systemd[1]: cri-containerd-d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3.scope: Deactivated successfully. Jan 24 00:57:17.941582 kubelet[2546]: I0124 00:57:17.941454 2546 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:57:17.950345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3-rootfs.mount: Deactivated successfully. Jan 24 00:57:17.994570 systemd[1]: Created slice kubepods-besteffort-pod43bd5f1f_4a0c_4b9f_b986_69bf7780bcee.slice - libcontainer container kubepods-besteffort-pod43bd5f1f_4a0c_4b9f_b986_69bf7780bcee.slice. 
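[Annotation] install-cni behaves like an init step: it drops the Calico CNI binaries and config onto the host and exits, which is why its containerd scope deactivates moments after the successful StartContainer, and why the kubelet then logs "Fast updating node status as it just became ready". Pod networking is still not fully up, though: the sandbox errors just below all trace back to the calico CNI plugin stat-ing /var/lib/calico/nodename, a file written by the calico-node container, and refusing to wire pods until it exists. A minimal sketch of that gate, mirroring the error text in the log rather than the plugin's actual source:

package main

import (
	"fmt"
	"os"
)

// The calico CNI plugin requires the node identity file written by the
// calico-node container before it will set up (or tear down) a sandbox.
func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("plugin type=%q failed: %v: check that the calico/node container is running\n",
			"calico", err)
		return
	}
	fmt.Println("node identity present; sandbox networking can proceed")
}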
Jan 24 00:57:18.003558 containerd[1500]: time="2026-01-24T00:57:18.003484689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ftl5s,Uid:43bd5f1f-4a0c-4b9f-b986-69bf7780bcee,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:18.010285 kubelet[2546]: W0124 00:57:18.010261 2546 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-6-n-32cc93a80b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object Jan 24 00:57:18.010392 kubelet[2546]: E0124 00:57:18.010298 2546 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-6-n-32cc93a80b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object" logger="UnhandledError" Jan 24 00:57:18.013884 kubelet[2546]: I0124 00:57:18.013854 2546 status_manager.go:890] "Failed to get status for pod" podUID="60844a2d-0038-4132-8338-140b75e01a74" pod="kube-system/coredns-668d6bf9bc-mnh95" err="pods \"coredns-668d6bf9bc-mnh95\" is forbidden: User \"system:node:ci-4081-3-6-n-32cc93a80b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-32cc93a80b' and this object" Jan 24 00:57:18.016408 systemd[1]: Created slice kubepods-burstable-pod60844a2d_0038_4132_8338_140b75e01a74.slice - libcontainer container kubepods-burstable-pod60844a2d_0038_4132_8338_140b75e01a74.slice. Jan 24 00:57:18.032173 systemd[1]: Created slice kubepods-burstable-pod1d12baac_e259_43f8_8c34_2fc70e4e9750.slice - libcontainer container kubepods-burstable-pod1d12baac_e259_43f8_8c34_2fc70e4e9750.slice. Jan 24 00:57:18.043585 systemd[1]: Created slice kubepods-besteffort-pod267130dd_42b7_45fa_9166_0420d7cd47cc.slice - libcontainer container kubepods-besteffort-pod267130dd_42b7_45fa_9166_0420d7cd47cc.slice. Jan 24 00:57:18.048546 systemd[1]: Created slice kubepods-besteffort-pode25c9c50_eb09_419b_a216_dabe2aa24f5e.slice - libcontainer container kubepods-besteffort-pode25c9c50_eb09_419b_a216_dabe2aa24f5e.slice. Jan 24 00:57:18.056717 systemd[1]: Created slice kubepods-besteffort-pod92edd234_ce88_420a_bb1b_56d2f203263f.slice - libcontainer container kubepods-besteffort-pod92edd234_ce88_420a_bb1b_56d2f203263f.slice. Jan 24 00:57:18.063707 systemd[1]: Created slice kubepods-besteffort-podabee6eff_7ee6_4417_a4eb_5f0514e6e7e9.slice - libcontainer container kubepods-besteffort-podabee6eff_7ee6_4417_a4eb_5f0514e6e7e9.slice. Jan 24 00:57:18.070933 systemd[1]: Created slice kubepods-besteffort-pod3be98e24_0896_49a9_8666_4ca8f66cf2c8.slice - libcontainer container kubepods-besteffort-pod3be98e24_0896_49a9_8666_4ca8f66cf2c8.slice. Jan 24 00:57:18.076965 systemd[1]: Created slice kubepods-besteffort-pod2623c9d2_b9f3_4861_944f_8da3fba4e042.slice - libcontainer container kubepods-besteffort-pod2623c9d2_b9f3_4861_944f_8da3fba4e042.slice. 
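[Annotation] Two things worth noting in the burst above. First, the forbidden list of the "coredns" configmap is a startup race with the node authorizer: the kubelet starts watching objects for a newly bound pod before the authorizer has observed the binding, so there is "no relationship found between node and this object" yet; this self-heals once the binding propagates. Second, the "Created slice" lines show the kubelet's systemd cgroup naming rule: the pod UID's dashes are escaped to underscores and the slice is nested under the pod's QoS class. A sketch of the rule, checkable directly against the UIDs in this log:

package main

import (
	"fmt"
	"strings"
)

// Derives the systemd slice name the kubelet uses for a pod cgroup, as
// seen in the "Created slice" lines above.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// csi-node-driver-ftl5s (BestEffort) and coredns-668d6bf9bc-mnh95 (Burstable):
	fmt.Println(sliceName("besteffort", "43bd5f1f-4a0c-4b9f-b986-69bf7780bcee"))
	fmt.Println(sliceName("burstable", "60844a2d-0038-4132-8338-140b75e01a74"))
}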
Jan 24 00:57:18.094269 containerd[1500]: time="2026-01-24T00:57:18.094128895Z" level=info msg="shim disconnected" id=d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3 namespace=k8s.io Jan 24 00:57:18.094536 containerd[1500]: time="2026-01-24T00:57:18.094383458Z" level=warning msg="cleaning up after shim disconnected" id=d60993a0cd9f8a4f12511c12ac4850a5d20b6e39e8bc8e03a31ead2782f1f6f3 namespace=k8s.io Jan 24 00:57:18.094536 containerd[1500]: time="2026-01-24T00:57:18.094397048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:57:18.097587 kubelet[2546]: I0124 00:57:18.097485 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-ca-bundle\") pod \"whisker-675bdfd5f-2k8rp\" (UID: \"2623c9d2-b9f3-4861-944f-8da3fba4e042\") " pod="calico-system/whisker-675bdfd5f-2k8rp" Jan 24 00:57:18.097587 kubelet[2546]: I0124 00:57:18.097530 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3be98e24-0896-49a9-8666-4ca8f66cf2c8-calico-apiserver-certs\") pod \"calico-apiserver-59667657-b8mx9\" (UID: \"3be98e24-0896-49a9-8666-4ca8f66cf2c8\") " pod="calico-apiserver/calico-apiserver-59667657-b8mx9" Jan 24 00:57:18.097587 kubelet[2546]: I0124 00:57:18.097565 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbb4z\" (UniqueName: \"kubernetes.io/projected/e25c9c50-eb09-419b-a216-dabe2aa24f5e-kube-api-access-pbb4z\") pod \"calico-apiserver-6ff89d9558-pr2mw\" (UID: \"e25c9c50-eb09-419b-a216-dabe2aa24f5e\") " pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" Jan 24 00:57:18.098714 kubelet[2546]: I0124 00:57:18.097592 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvjcq\" (UniqueName: \"kubernetes.io/projected/3be98e24-0896-49a9-8666-4ca8f66cf2c8-kube-api-access-nvjcq\") pod \"calico-apiserver-59667657-b8mx9\" (UID: \"3be98e24-0896-49a9-8666-4ca8f66cf2c8\") " pod="calico-apiserver/calico-apiserver-59667657-b8mx9" Jan 24 00:57:18.098714 kubelet[2546]: I0124 00:57:18.097614 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-backend-key-pair\") pod \"whisker-675bdfd5f-2k8rp\" (UID: \"2623c9d2-b9f3-4861-944f-8da3fba4e042\") " pod="calico-system/whisker-675bdfd5f-2k8rp" Jan 24 00:57:18.098714 kubelet[2546]: I0124 00:57:18.097633 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptnkw\" (UniqueName: \"kubernetes.io/projected/60844a2d-0038-4132-8338-140b75e01a74-kube-api-access-ptnkw\") pod \"coredns-668d6bf9bc-mnh95\" (UID: \"60844a2d-0038-4132-8338-140b75e01a74\") " pod="kube-system/coredns-668d6bf9bc-mnh95" Jan 24 00:57:18.098714 kubelet[2546]: I0124 00:57:18.097663 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/267130dd-42b7-45fa-9166-0420d7cd47cc-goldmane-key-pair\") pod \"goldmane-666569f655-9lcpv\" (UID: \"267130dd-42b7-45fa-9166-0420d7cd47cc\") " pod="calico-system/goldmane-666569f655-9lcpv" Jan 24 00:57:18.098714 kubelet[2546]: I0124 00:57:18.097681 
2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmnd\" (UniqueName: \"kubernetes.io/projected/1d12baac-e259-43f8-8c34-2fc70e4e9750-kube-api-access-6zmnd\") pod \"coredns-668d6bf9bc-7fv4k\" (UID: \"1d12baac-e259-43f8-8c34-2fc70e4e9750\") " pod="kube-system/coredns-668d6bf9bc-7fv4k" Jan 24 00:57:18.098886 kubelet[2546]: I0124 00:57:18.097704 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgfk\" (UniqueName: \"kubernetes.io/projected/abee6eff-7ee6-4417-a4eb-5f0514e6e7e9-kube-api-access-gvgfk\") pod \"calico-apiserver-6ff89d9558-qsdz4\" (UID: \"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9\") " pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" Jan 24 00:57:18.098886 kubelet[2546]: I0124 00:57:18.097754 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60844a2d-0038-4132-8338-140b75e01a74-config-volume\") pod \"coredns-668d6bf9bc-mnh95\" (UID: \"60844a2d-0038-4132-8338-140b75e01a74\") " pod="kube-system/coredns-668d6bf9bc-mnh95" Jan 24 00:57:18.102897 kubelet[2546]: I0124 00:57:18.100414 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvmn\" (UniqueName: \"kubernetes.io/projected/2623c9d2-b9f3-4861-944f-8da3fba4e042-kube-api-access-gvvmn\") pod \"whisker-675bdfd5f-2k8rp\" (UID: \"2623c9d2-b9f3-4861-944f-8da3fba4e042\") " pod="calico-system/whisker-675bdfd5f-2k8rp" Jan 24 00:57:18.102897 kubelet[2546]: I0124 00:57:18.100441 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e25c9c50-eb09-419b-a216-dabe2aa24f5e-calico-apiserver-certs\") pod \"calico-apiserver-6ff89d9558-pr2mw\" (UID: \"e25c9c50-eb09-419b-a216-dabe2aa24f5e\") " pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" Jan 24 00:57:18.102897 kubelet[2546]: I0124 00:57:18.100455 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/267130dd-42b7-45fa-9166-0420d7cd47cc-goldmane-ca-bundle\") pod \"goldmane-666569f655-9lcpv\" (UID: \"267130dd-42b7-45fa-9166-0420d7cd47cc\") " pod="calico-system/goldmane-666569f655-9lcpv" Jan 24 00:57:18.102897 kubelet[2546]: I0124 00:57:18.100466 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjqwt\" (UniqueName: \"kubernetes.io/projected/92edd234-ce88-420a-bb1b-56d2f203263f-kube-api-access-wjqwt\") pod \"calico-kube-controllers-85cdccf5-5whtp\" (UID: \"92edd234-ce88-420a-bb1b-56d2f203263f\") " pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" Jan 24 00:57:18.102897 kubelet[2546]: I0124 00:57:18.100491 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abee6eff-7ee6-4417-a4eb-5f0514e6e7e9-calico-apiserver-certs\") pod \"calico-apiserver-6ff89d9558-qsdz4\" (UID: \"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9\") " pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" Jan 24 00:57:18.103090 kubelet[2546]: I0124 00:57:18.100502 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1d12baac-e259-43f8-8c34-2fc70e4e9750-config-volume\") pod \"coredns-668d6bf9bc-7fv4k\" (UID: \"1d12baac-e259-43f8-8c34-2fc70e4e9750\") " pod="kube-system/coredns-668d6bf9bc-7fv4k" Jan 24 00:57:18.103090 kubelet[2546]: I0124 00:57:18.100517 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhfqz\" (UniqueName: \"kubernetes.io/projected/267130dd-42b7-45fa-9166-0420d7cd47cc-kube-api-access-dhfqz\") pod \"goldmane-666569f655-9lcpv\" (UID: \"267130dd-42b7-45fa-9166-0420d7cd47cc\") " pod="calico-system/goldmane-666569f655-9lcpv" Jan 24 00:57:18.103090 kubelet[2546]: I0124 00:57:18.100533 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92edd234-ce88-420a-bb1b-56d2f203263f-tigera-ca-bundle\") pod \"calico-kube-controllers-85cdccf5-5whtp\" (UID: \"92edd234-ce88-420a-bb1b-56d2f203263f\") " pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" Jan 24 00:57:18.103090 kubelet[2546]: I0124 00:57:18.100546 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/267130dd-42b7-45fa-9166-0420d7cd47cc-config\") pod \"goldmane-666569f655-9lcpv\" (UID: \"267130dd-42b7-45fa-9166-0420d7cd47cc\") " pod="calico-system/goldmane-666569f655-9lcpv" Jan 24 00:57:18.172249 containerd[1500]: time="2026-01-24T00:57:18.172209687Z" level=error msg="Failed to destroy network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.172662 containerd[1500]: time="2026-01-24T00:57:18.172542001Z" level=error msg="encountered an error cleaning up failed sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.172662 containerd[1500]: time="2026-01-24T00:57:18.172591051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ftl5s,Uid:43bd5f1f-4a0c-4b9f-b986-69bf7780bcee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.172829 kubelet[2546]: E0124 00:57:18.172796 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.173337 kubelet[2546]: E0124 00:57:18.172922 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:18.173337 kubelet[2546]: E0124 00:57:18.172940 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ftl5s" Jan 24 00:57:18.173337 kubelet[2546]: E0124 00:57:18.172971 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:18.347257 containerd[1500]: time="2026-01-24T00:57:18.347164406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9lcpv,Uid:267130dd-42b7-45fa-9166-0420d7cd47cc,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:18.355848 containerd[1500]: time="2026-01-24T00:57:18.355780184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-pr2mw,Uid:e25c9c50-eb09-419b-a216-dabe2aa24f5e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:18.367314 containerd[1500]: time="2026-01-24T00:57:18.367230005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccf5-5whtp,Uid:92edd234-ce88-420a-bb1b-56d2f203263f,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:18.367632 containerd[1500]: time="2026-01-24T00:57:18.367614090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-qsdz4,Uid:abee6eff-7ee6-4417-a4eb-5f0514e6e7e9,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:18.376461 containerd[1500]: time="2026-01-24T00:57:18.376417490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59667657-b8mx9,Uid:3be98e24-0896-49a9-8666-4ca8f66cf2c8,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:57:18.382579 containerd[1500]: time="2026-01-24T00:57:18.381982674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675bdfd5f-2k8rp,Uid:2623c9d2-b9f3-4861-944f-8da3fba4e042,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:18.558527 containerd[1500]: time="2026-01-24T00:57:18.557918314Z" level=error msg="Failed to destroy network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.560839 containerd[1500]: time="2026-01-24T00:57:18.560800527Z" level=error msg="encountered an error cleaning up failed sandbox 
\"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.560917 containerd[1500]: time="2026-01-24T00:57:18.560847948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9lcpv,Uid:267130dd-42b7-45fa-9166-0420d7cd47cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.562131 kubelet[2546]: E0124 00:57:18.561357 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.562131 kubelet[2546]: E0124 00:57:18.561416 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9lcpv" Jan 24 00:57:18.562131 kubelet[2546]: E0124 00:57:18.561433 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9lcpv" Jan 24 00:57:18.562231 kubelet[2546]: E0124 00:57:18.561505 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9lcpv_calico-system(267130dd-42b7-45fa-9166-0420d7cd47cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9lcpv_calico-system(267130dd-42b7-45fa-9166-0420d7cd47cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:57:18.566380 containerd[1500]: time="2026-01-24T00:57:18.566341461Z" level=error msg="Failed to destroy network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.566759 containerd[1500]: time="2026-01-24T00:57:18.566725835Z" 
level=error msg="encountered an error cleaning up failed sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.566859 containerd[1500]: time="2026-01-24T00:57:18.566843806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675bdfd5f-2k8rp,Uid:2623c9d2-b9f3-4861-944f-8da3fba4e042,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.567153 kubelet[2546]: E0124 00:57:18.567120 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.567312 kubelet[2546]: E0124 00:57:18.567300 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-675bdfd5f-2k8rp" Jan 24 00:57:18.567374 kubelet[2546]: E0124 00:57:18.567357 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-675bdfd5f-2k8rp" Jan 24 00:57:18.568102 kubelet[2546]: E0124 00:57:18.568077 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-675bdfd5f-2k8rp_calico-system(2623c9d2-b9f3-4861-944f-8da3fba4e042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-675bdfd5f-2k8rp_calico-system(2623c9d2-b9f3-4861-944f-8da3fba4e042)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-675bdfd5f-2k8rp" podUID="2623c9d2-b9f3-4861-944f-8da3fba4e042" Jan 24 00:57:18.574425 containerd[1500]: time="2026-01-24T00:57:18.574387393Z" level=error msg="Failed to destroy network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.576949 
containerd[1500]: time="2026-01-24T00:57:18.576808230Z" level=error msg="encountered an error cleaning up failed sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.576949 containerd[1500]: time="2026-01-24T00:57:18.576852341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-pr2mw,Uid:e25c9c50-eb09-419b-a216-dabe2aa24f5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.577050 kubelet[2546]: E0124 00:57:18.576986 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.577050 kubelet[2546]: E0124 00:57:18.577025 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" Jan 24 00:57:18.577050 kubelet[2546]: E0124 00:57:18.577039 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" Jan 24 00:57:18.577122 kubelet[2546]: E0124 00:57:18.577067 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff89d9558-pr2mw_calico-apiserver(e25c9c50-eb09-419b-a216-dabe2aa24f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff89d9558-pr2mw_calico-apiserver(e25c9c50-eb09-419b-a216-dabe2aa24f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:57:18.595119 containerd[1500]: time="2026-01-24T00:57:18.594904767Z" level=error msg="Failed to destroy network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.595493 containerd[1500]: time="2026-01-24T00:57:18.595459093Z" level=error msg="encountered an error cleaning up failed sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.595591 containerd[1500]: time="2026-01-24T00:57:18.595576635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-qsdz4,Uid:abee6eff-7ee6-4417-a4eb-5f0514e6e7e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.596257 kubelet[2546]: E0124 00:57:18.595867 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.596257 kubelet[2546]: E0124 00:57:18.595917 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" Jan 24 00:57:18.596257 kubelet[2546]: E0124 00:57:18.595951 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" Jan 24 00:57:18.596355 kubelet[2546]: E0124 00:57:18.595990 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff89d9558-qsdz4_calico-apiserver(abee6eff-7ee6-4417-a4eb-5f0514e6e7e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff89d9558-qsdz4_calico-apiserver(abee6eff-7ee6-4417-a4eb-5f0514e6e7e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:57:18.606017 containerd[1500]: time="2026-01-24T00:57:18.605987134Z" level=error msg="Failed to destroy network for sandbox 
\"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.607672 containerd[1500]: time="2026-01-24T00:57:18.606368408Z" level=error msg="encountered an error cleaning up failed sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.607672 containerd[1500]: time="2026-01-24T00:57:18.606408749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59667657-b8mx9,Uid:3be98e24-0896-49a9-8666-4ca8f66cf2c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.607821 kubelet[2546]: E0124 00:57:18.606546 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.607821 kubelet[2546]: E0124 00:57:18.606582 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" Jan 24 00:57:18.607821 kubelet[2546]: E0124 00:57:18.606596 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" Jan 24 00:57:18.607903 kubelet[2546]: E0124 00:57:18.606625 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59667657-b8mx9_calico-apiserver(3be98e24-0896-49a9-8666-4ca8f66cf2c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59667657-b8mx9_calico-apiserver(3be98e24-0896-49a9-8666-4ca8f66cf2c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 
00:57:18.609892 containerd[1500]: time="2026-01-24T00:57:18.609868358Z" level=error msg="Failed to destroy network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.610132 containerd[1500]: time="2026-01-24T00:57:18.610117901Z" level=error msg="encountered an error cleaning up failed sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.610181 containerd[1500]: time="2026-01-24T00:57:18.610149501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccf5-5whtp,Uid:92edd234-ce88-420a-bb1b-56d2f203263f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.610311 kubelet[2546]: E0124 00:57:18.610285 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:18.610374 kubelet[2546]: E0124 00:57:18.610319 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" Jan 24 00:57:18.610374 kubelet[2546]: E0124 00:57:18.610332 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" Jan 24 00:57:18.610374 kubelet[2546]: E0124 00:57:18.610363 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85cdccf5-5whtp_calico-system(92edd234-ce88-420a-bb1b-56d2f203263f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cdccf5-5whtp_calico-system(92edd234-ce88-420a-bb1b-56d2f203263f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:57:19.083605 kubelet[2546]: I0124 00:57:19.082442 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:19.085436 containerd[1500]: time="2026-01-24T00:57:19.085342041Z" level=info msg="StopPodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\"" Jan 24 00:57:19.086406 containerd[1500]: time="2026-01-24T00:57:19.085777086Z" level=info msg="Ensure that sandbox cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1 in task-service has been cleanup successfully" Jan 24 00:57:19.086705 kubelet[2546]: I0124 00:57:19.086646 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:19.088532 containerd[1500]: time="2026-01-24T00:57:19.087717007Z" level=info msg="StopPodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\"" Jan 24 00:57:19.088532 containerd[1500]: time="2026-01-24T00:57:19.088101341Z" level=info msg="Ensure that sandbox 934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b in task-service has been cleanup successfully" Jan 24 00:57:19.094297 kubelet[2546]: I0124 00:57:19.094249 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:19.101815 containerd[1500]: time="2026-01-24T00:57:19.101648166Z" level=info msg="StopPodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\"" Jan 24 00:57:19.102361 containerd[1500]: time="2026-01-24T00:57:19.102295053Z" level=info msg="Ensure that sandbox a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c in task-service has been cleanup successfully" Jan 24 00:57:19.106789 kubelet[2546]: I0124 00:57:19.106122 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:19.109334 containerd[1500]: time="2026-01-24T00:57:19.108813293Z" level=info msg="StopPodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\"" Jan 24 00:57:19.112298 containerd[1500]: time="2026-01-24T00:57:19.112226489Z" level=info msg="Ensure that sandbox e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f in task-service has been cleanup successfully" Jan 24 00:57:19.128124 kubelet[2546]: I0124 00:57:19.127965 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:19.131957 containerd[1500]: time="2026-01-24T00:57:19.131197352Z" level=info msg="StopPodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\"" Jan 24 00:57:19.131957 containerd[1500]: time="2026-01-24T00:57:19.131555456Z" level=info msg="Ensure that sandbox caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053 in task-service has been cleanup successfully" Jan 24 00:57:19.176699 containerd[1500]: time="2026-01-24T00:57:19.175663399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:57:19.184503 kubelet[2546]: I0124 00:57:19.184482 2546 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:19.185989 containerd[1500]: time="2026-01-24T00:57:19.185234281Z" level=info msg="StopPodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\"" Jan 24 00:57:19.188756 containerd[1500]: time="2026-01-24T00:57:19.186599916Z" level=info msg="Ensure that sandbox 77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d in task-service has been cleanup successfully" Jan 24 00:57:19.190448 kubelet[2546]: I0124 00:57:19.190423 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:19.195006 containerd[1500]: time="2026-01-24T00:57:19.194979216Z" level=info msg="StopPodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\"" Jan 24 00:57:19.195982 containerd[1500]: time="2026-01-24T00:57:19.195965976Z" level=info msg="Ensure that sandbox 35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88 in task-service has been cleanup successfully" Jan 24 00:57:19.211082 containerd[1500]: time="2026-01-24T00:57:19.211036598Z" level=error msg="StopPodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" failed" error="failed to destroy network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.212519 containerd[1500]: time="2026-01-24T00:57:19.212496303Z" level=error msg="StopPodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" failed" error="failed to destroy network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.214473 kubelet[2546]: E0124 00:57:19.214431 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:19.214541 kubelet[2546]: E0124 00:57:19.214483 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f"} Jan 24 00:57:19.214541 kubelet[2546]: E0124 00:57:19.214524 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e25c9c50-eb09-419b-a216-dabe2aa24f5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.214614 kubelet[2546]: E0124 00:57:19.214542 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"e25c9c50-eb09-419b-a216-dabe2aa24f5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:57:19.214614 kubelet[2546]: E0124 00:57:19.214562 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:19.214614 kubelet[2546]: E0124 00:57:19.214574 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1"} Jan 24 00:57:19.214614 kubelet[2546]: E0124 00:57:19.214585 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.214718 kubelet[2546]: E0124 00:57:19.214601 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:57:19.217572 containerd[1500]: time="2026-01-24T00:57:19.217424486Z" level=error msg="StopPodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" failed" error="failed to destroy network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.217644 kubelet[2546]: E0124 00:57:19.217590 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:19.217644 kubelet[2546]: 
E0124 00:57:19.217634 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b"} Jan 24 00:57:19.217706 kubelet[2546]: E0124 00:57:19.217661 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3be98e24-0896-49a9-8666-4ca8f66cf2c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.218060 kubelet[2546]: E0124 00:57:19.217872 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3be98e24-0896-49a9-8666-4ca8f66cf2c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:57:19.224875 containerd[1500]: time="2026-01-24T00:57:19.224844906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mnh95,Uid:60844a2d-0038-4132-8338-140b75e01a74,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:19.235256 containerd[1500]: time="2026-01-24T00:57:19.235118576Z" level=error msg="StopPodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" failed" error="failed to destroy network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.235782 kubelet[2546]: E0124 00:57:19.235401 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:19.235782 kubelet[2546]: E0124 00:57:19.235443 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c"} Jan 24 00:57:19.235782 kubelet[2546]: E0124 00:57:19.235468 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92edd234-ce88-420a-bb1b-56d2f203263f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.235782 kubelet[2546]: E0124 00:57:19.235490 2546 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"92edd234-ce88-420a-bb1b-56d2f203263f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:57:19.243972 containerd[1500]: time="2026-01-24T00:57:19.243944840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7fv4k,Uid:1d12baac-e259-43f8-8c34-2fc70e4e9750,Namespace:kube-system,Attempt:0,}" Jan 24 00:57:19.249887 containerd[1500]: time="2026-01-24T00:57:19.249854694Z" level=error msg="StopPodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" failed" error="failed to destroy network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.250302 kubelet[2546]: E0124 00:57:19.250175 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:19.250302 kubelet[2546]: E0124 00:57:19.250224 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053"} Jan 24 00:57:19.250302 kubelet[2546]: E0124 00:57:19.250250 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2623c9d2-b9f3-4861-944f-8da3fba4e042\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.250302 kubelet[2546]: E0124 00:57:19.250277 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2623c9d2-b9f3-4861-944f-8da3fba4e042\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-675bdfd5f-2k8rp" podUID="2623c9d2-b9f3-4861-944f-8da3fba4e042" Jan 24 00:57:19.260933 containerd[1500]: time="2026-01-24T00:57:19.260892362Z" level=error msg="StopPodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" failed" error="failed to destroy network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.261230 kubelet[2546]: E0124 00:57:19.261079 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:19.261230 kubelet[2546]: E0124 00:57:19.261130 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d"} Jan 24 00:57:19.261230 kubelet[2546]: E0124 00:57:19.261161 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"267130dd-42b7-45fa-9166-0420d7cd47cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.261230 kubelet[2546]: E0124 00:57:19.261178 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"267130dd-42b7-45fa-9166-0420d7cd47cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:57:19.265093 containerd[1500]: time="2026-01-24T00:57:19.265067927Z" level=error msg="StopPodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" failed" error="failed to destroy network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.265332 kubelet[2546]: E0124 00:57:19.265309 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:19.265428 kubelet[2546]: E0124 00:57:19.265415 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88"} Jan 24 00:57:19.265496 kubelet[2546]: E0124 00:57:19.265484 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:57:19.265585 kubelet[2546]: E0124 00:57:19.265557 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:19.306944 containerd[1500]: time="2026-01-24T00:57:19.306826194Z" level=error msg="Failed to destroy network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.307395 containerd[1500]: time="2026-01-24T00:57:19.307300459Z" level=error msg="encountered an error cleaning up failed sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.307395 containerd[1500]: time="2026-01-24T00:57:19.307353780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mnh95,Uid:60844a2d-0038-4132-8338-140b75e01a74,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.307622 kubelet[2546]: E0124 00:57:19.307590 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.307676 kubelet[2546]: E0124 00:57:19.307638 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mnh95" Jan 24 00:57:19.307676 kubelet[2546]: E0124 00:57:19.307655 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mnh95" Jan 24 00:57:19.307754 kubelet[2546]: E0124 00:57:19.307710 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mnh95_kube-system(60844a2d-0038-4132-8338-140b75e01a74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mnh95_kube-system(60844a2d-0038-4132-8338-140b75e01a74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mnh95" podUID="60844a2d-0038-4132-8338-140b75e01a74" Jan 24 00:57:19.314436 containerd[1500]: time="2026-01-24T00:57:19.314395955Z" level=error msg="Failed to destroy network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.314703 containerd[1500]: time="2026-01-24T00:57:19.314678948Z" level=error msg="encountered an error cleaning up failed sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.314728 containerd[1500]: time="2026-01-24T00:57:19.314715128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7fv4k,Uid:1d12baac-e259-43f8-8c34-2fc70e4e9750,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.314995 kubelet[2546]: E0124 00:57:19.314878 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:57:19.314995 kubelet[2546]: E0124 00:57:19.314921 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7fv4k" Jan 24 00:57:19.314995 kubelet[2546]: E0124 00:57:19.314936 2546 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7fv4k"
Jan 24 00:57:19.315076 kubelet[2546]: E0124 00:57:19.314970 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7fv4k_kube-system(1d12baac-e259-43f8-8c34-2fc70e4e9750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7fv4k_kube-system(1d12baac-e259-43f8-8c34-2fc70e4e9750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7fv4k" podUID="1d12baac-e259-43f8-8c34-2fc70e4e9750"
Jan 24 00:57:19.421337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053-shm.mount: Deactivated successfully.
Jan 24 00:57:19.421518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1-shm.mount: Deactivated successfully.
Jan 24 00:57:19.421654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b-shm.mount: Deactivated successfully.
Jan 24 00:57:19.421839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c-shm.mount: Deactivated successfully.
Jan 24 00:57:19.421971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f-shm.mount: Deactivated successfully.
Jan 24 00:57:19.422094 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d-shm.mount: Deactivated successfully.
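
Every sandbox in the burst above fails the same way: the Calico CNI plugin cannot stat /var/lib/calico/nodename, so CNI ADD fails, the follow-up CNI DEL during cleanup fails too, containerd marks the sandbox SANDBOX_UNKNOWN, and kubelet's pod workers reschedule the create with backoff. The shm.mount units systemd deactivates just above are containerd tearing down the /dev/shm mounts of those dead sandboxes. The nodename file is written by the calico/node container once it is running, which is why the PullImage of ghcr.io/flatcar/calico/node:v3.30.4 requested earlier at 00:57:19 matters here. A minimal Go sketch, not Calico's actual source, of the kind of lookup that is failing; the path and the hint text are taken verbatim from the error lines above:

    // Sketch only: the node-name lookup behind the repeated
    // "stat /var/lib/calico/nodename" errors in this log.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    func determineNodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		// Until calico/node starts and writes this file, every CNI
    		// ADD and DEL on the host fails with this wrapped error.
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := determineNodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("node name:", name)
    }
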
Jan 24 00:57:20.194576 kubelet[2546]: I0124 00:57:20.194515 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481"
Jan 24 00:57:20.196729 containerd[1500]: time="2026-01-24T00:57:20.196638537Z" level=info msg="StopPodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\""
Jan 24 00:57:20.197835 containerd[1500]: time="2026-01-24T00:57:20.197109712Z" level=info msg="Ensure that sandbox ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481 in task-service has been cleanup successfully"
Jan 24 00:57:20.202781 kubelet[2546]: I0124 00:57:20.200640 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5"
Jan 24 00:57:20.203866 containerd[1500]: time="2026-01-24T00:57:20.203576447Z" level=info msg="StopPodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\""
Jan 24 00:57:20.205489 containerd[1500]: time="2026-01-24T00:57:20.205120933Z" level=info msg="Ensure that sandbox bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5 in task-service has been cleanup successfully"
Jan 24 00:57:20.250664 containerd[1500]: time="2026-01-24T00:57:20.250456238Z" level=error msg="StopPodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" failed" error="failed to destroy network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:57:20.251848 kubelet[2546]: E0124 00:57:20.251581 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481"
Jan 24 00:57:20.251848 kubelet[2546]: E0124 00:57:20.251641 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481"}
Jan 24 00:57:20.251848 kubelet[2546]: E0124 00:57:20.251710 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d12baac-e259-43f8-8c34-2fc70e4e9750\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:57:20.251848 kubelet[2546]: E0124 00:57:20.251786 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d12baac-e259-43f8-8c34-2fc70e4e9750\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7fv4k" podUID="1d12baac-e259-43f8-8c34-2fc70e4e9750"
Jan 24 00:57:20.261084 containerd[1500]: time="2026-01-24T00:57:20.260991534Z" level=error msg="StopPodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" failed" error="failed to destroy network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:57:20.261361 kubelet[2546]: E0124 00:57:20.261248 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5"
Jan 24 00:57:20.261361 kubelet[2546]: E0124 00:57:20.261305 2546 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5"}
Jan 24 00:57:20.261361 kubelet[2546]: E0124 00:57:20.261346 2546 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60844a2d-0038-4132-8338-140b75e01a74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:57:20.261705 kubelet[2546]: E0124 00:57:20.261380 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60844a2d-0038-4132-8338-140b75e01a74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mnh95" podUID="60844a2d-0038-4132-8338-140b75e01a74"
Jan 24 00:57:25.467829 kubelet[2546]: I0124 00:57:25.466260 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:57:26.790072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375704584.mount: Deactivated successfully.
Jan 24 00:57:26.821188 containerd[1500]: time="2026-01-24T00:57:26.821122678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:57:26.822324 containerd[1500]: time="2026-01-24T00:57:26.822207816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Jan 24 00:57:26.824020 containerd[1500]: time="2026-01-24T00:57:26.823261103Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:57:26.825137 containerd[1500]: time="2026-01-24T00:57:26.825015705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:57:26.825661 containerd[1500]: time="2026-01-24T00:57:26.825635849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.6499007s"
Jan 24 00:57:26.825691 containerd[1500]: time="2026-01-24T00:57:26.825661459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Jan 24 00:57:26.850061 containerd[1500]: time="2026-01-24T00:57:26.849383301Z" level=info msg="CreateContainer within sandbox \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 24 00:57:26.872298 containerd[1500]: time="2026-01-24T00:57:26.872259367Z" level=info msg="CreateContainer within sandbox \"c590654a8496e4bbb5c3347b81c0fb0da656f463508a18e2707ba55dcb6830a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa\""
Jan 24 00:57:26.873921 containerd[1500]: time="2026-01-24T00:57:26.872797141Z" level=info msg="StartContainer for \"0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa\""
Jan 24 00:57:26.906917 systemd[1]: Started cri-containerd-0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa.scope - libcontainer container 0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa.
Jan 24 00:57:26.937886 containerd[1500]: time="2026-01-24T00:57:26.937838714Z" level=info msg="StartContainer for \"0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa\" returns successfully"
Jan 24 00:57:27.013470 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 24 00:57:27.013643 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 24 00:57:27.139825 containerd[1500]: time="2026-01-24T00:57:27.139566631Z" level=info msg="StopPodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\""
Jan 24 00:57:27.248149 kubelet[2546]: I0124 00:57:27.247095 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pd5qk" podStartSLOduration=2.276755732 podStartE2EDuration="21.247080218s" podCreationTimestamp="2026-01-24 00:57:06 +0000 UTC" firstStartedPulling="2026-01-24 00:57:07.856170789 +0000 UTC m=+20.006112302" lastFinishedPulling="2026-01-24 00:57:26.826495295 +0000 UTC m=+38.976436788" observedRunningTime="2026-01-24 00:57:27.246907547 +0000 UTC m=+39.396849030" watchObservedRunningTime="2026-01-24 00:57:27.247080218 +0000 UTC m=+39.397021711"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.219 [INFO][3756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.220 [INFO][3756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" iface="eth0" netns="/var/run/netns/cni-4eddf51d-39e0-c802-b7d9-007584d8f057"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.221 [INFO][3756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" iface="eth0" netns="/var/run/netns/cni-4eddf51d-39e0-c802-b7d9-007584d8f057"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.221 [INFO][3756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" iface="eth0" netns="/var/run/netns/cni-4eddf51d-39e0-c802-b7d9-007584d8f057"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.221 [INFO][3756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.221 [INFO][3756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.247 [INFO][3764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.248 [INFO][3764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.248 [INFO][3764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.253 [WARNING][3764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.253 [INFO][3764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0"
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.255 [INFO][3764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 24 00:57:27.264590 containerd[1500]: 2026-01-24 00:57:27.262 [INFO][3756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053"
Jan 24 00:57:27.265028 containerd[1500]: time="2026-01-24T00:57:27.264696431Z" level=info msg="TearDown network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" successfully"
Jan 24 00:57:27.265028 containerd[1500]: time="2026-01-24T00:57:27.264715551Z" level=info msg="StopPodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" returns successfully"
Jan 24 00:57:27.371550 kubelet[2546]: I0124 00:57:27.371495 2546 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvvmn\" (UniqueName: \"kubernetes.io/projected/2623c9d2-b9f3-4861-944f-8da3fba4e042-kube-api-access-gvvmn\") pod \"2623c9d2-b9f3-4861-944f-8da3fba4e042\" (UID: \"2623c9d2-b9f3-4861-944f-8da3fba4e042\") "
Jan 24 00:57:27.371550 kubelet[2546]: I0124 00:57:27.371552 2546 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-ca-bundle\") pod \"2623c9d2-b9f3-4861-944f-8da3fba4e042\" (UID: \"2623c9d2-b9f3-4861-944f-8da3fba4e042\") "
Jan 24 00:57:27.371550 kubelet[2546]: I0124 00:57:27.371572 2546 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-backend-key-pair\") pod \"2623c9d2-b9f3-4861-944f-8da3fba4e042\" (UID: \"2623c9d2-b9f3-4861-944f-8da3fba4e042\") "
Jan 24 00:57:27.375177 kubelet[2546]: I0124 00:57:27.375142 2546 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2623c9d2-b9f3-4861-944f-8da3fba4e042" (UID: "2623c9d2-b9f3-4861-944f-8da3fba4e042"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 24 00:57:27.375813 kubelet[2546]: I0124 00:57:27.375774 2546 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2623c9d2-b9f3-4861-944f-8da3fba4e042" (UID: "2623c9d2-b9f3-4861-944f-8da3fba4e042"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 24 00:57:27.376069 kubelet[2546]: I0124 00:57:27.376035 2546 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2623c9d2-b9f3-4861-944f-8da3fba4e042-kube-api-access-gvvmn" (OuterVolumeSpecName: "kube-api-access-gvvmn") pod "2623c9d2-b9f3-4861-944f-8da3fba4e042" (UID: "2623c9d2-b9f3-4861-944f-8da3fba4e042"). InnerVolumeSpecName "kube-api-access-gvvmn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 00:57:27.472430 kubelet[2546]: I0124 00:57:27.472323 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-ca-bundle\") on node \"ci-4081-3-6-n-32cc93a80b\" DevicePath \"\""
Jan 24 00:57:27.472430 kubelet[2546]: I0124 00:57:27.472382 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2623c9d2-b9f3-4861-944f-8da3fba4e042-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-32cc93a80b\" DevicePath \"\""
Jan 24 00:57:27.472430 kubelet[2546]: I0124 00:57:27.472400 2546 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvvmn\" (UniqueName: \"kubernetes.io/projected/2623c9d2-b9f3-4861-944f-8da3fba4e042-kube-api-access-gvvmn\") on node \"ci-4081-3-6-n-32cc93a80b\" DevicePath \"\""
Jan 24 00:57:27.791642 systemd[1]: run-netns-cni\x2d4eddf51d\x2d39e0\x2dc802\x2db7d9\x2d007584d8f057.mount: Deactivated successfully.
Jan 24 00:57:27.791946 systemd[1]: var-lib-kubelet-pods-2623c9d2\x2db9f3\x2d4861\x2d944f\x2d8da3fba4e042-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvvmn.mount: Deactivated successfully.
Jan 24 00:57:27.792096 systemd[1]: var-lib-kubelet-pods-2623c9d2\x2db9f3\x2d4861\x2d944f\x2d8da3fba4e042-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 24 00:57:27.983559 systemd[1]: Removed slice kubepods-besteffort-pod2623c9d2_b9f3_4861_944f_8da3fba4e042.slice - libcontainer container kubepods-besteffort-pod2623c9d2_b9f3_4861_944f_8da3fba4e042.slice.
Jan 24 00:57:28.339507 systemd[1]: Created slice kubepods-besteffort-pod52940e35_8fee_4532_9c73_0644eb969513.slice - libcontainer container kubepods-besteffort-pod52940e35_8fee_4532_9c73_0644eb969513.slice.
Jan 24 00:57:28.379236 kubelet[2546]: I0124 00:57:28.379183 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52940e35-8fee-4532-9c73-0644eb969513-whisker-ca-bundle\") pod \"whisker-6bf994bc7f-8g8k6\" (UID: \"52940e35-8fee-4532-9c73-0644eb969513\") " pod="calico-system/whisker-6bf994bc7f-8g8k6"
Jan 24 00:57:28.379236 kubelet[2546]: I0124 00:57:28.379223 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/52940e35-8fee-4532-9c73-0644eb969513-whisker-backend-key-pair\") pod \"whisker-6bf994bc7f-8g8k6\" (UID: \"52940e35-8fee-4532-9c73-0644eb969513\") " pod="calico-system/whisker-6bf994bc7f-8g8k6"
Jan 24 00:57:28.379236 kubelet[2546]: I0124 00:57:28.379238 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkcx7\" (UniqueName: \"kubernetes.io/projected/52940e35-8fee-4532-9c73-0644eb969513-kube-api-access-zkcx7\") pod \"whisker-6bf994bc7f-8g8k6\" (UID: \"52940e35-8fee-4532-9c73-0644eb969513\") " pod="calico-system/whisker-6bf994bc7f-8g8k6"
Jan 24 00:57:28.645531 containerd[1500]: time="2026-01-24T00:57:28.645427150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf994bc7f-8g8k6,Uid:52940e35-8fee-4532-9c73-0644eb969513,Namespace:calico-system,Attempt:0,}"
Jan 24 00:57:28.763068 systemd-networkd[1402]: calicad9877ef00: Link UP
Jan 24 00:57:28.763935 systemd-networkd[1402]: calicad9877ef00: Gained carrier
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.687 [INFO][3923] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.696 [INFO][3923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0 whisker-6bf994bc7f- calico-system 52940e35-8fee-4532-9c73-0644eb969513 916 0 2026-01-24 00:57:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bf994bc7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b whisker-6bf994bc7f-8g8k6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicad9877ef00 [] [] }} ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.696 [INFO][3923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.722 [INFO][3950] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" HandleID="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.722 [INFO][3950] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" HandleID="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"whisker-6bf994bc7f-8g8k6", "timestamp":"2026-01-24 00:57:28.722492352 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.722 [INFO][3950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.722 [INFO][3950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.722 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b'
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.728 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.731 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.735 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.737 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.739 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.739 [INFO][3950] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.740 [INFO][3950] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.744 [INFO][3950] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.749 [INFO][3950] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.129/26] block=192.168.24.128/26 handle="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.749 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.129/26] handle="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.749 [INFO][3950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 24 00:57:28.781472 containerd[1500]: 2026-01-24 00:57:28.749 [INFO][3950] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.129/26] IPv6=[] ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" HandleID="k8s-pod-network.7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.782063 containerd[1500]: 2026-01-24 00:57:28.752 [INFO][3923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0", GenerateName:"whisker-6bf994bc7f-", Namespace:"calico-system", SelfLink:"", UID:"52940e35-8fee-4532-9c73-0644eb969513", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bf994bc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"whisker-6bf994bc7f-8g8k6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicad9877ef00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 24 00:57:28.782063 containerd[1500]: 2026-01-24 00:57:28.753 [INFO][3923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.129/32] ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.782063 containerd[1500]: 2026-01-24 00:57:28.753 [INFO][3923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicad9877ef00 ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.782063 containerd[1500]: 2026-01-24 00:57:28.765 [INFO][3923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.782063 containerd[1500]: 2026-01-24 00:57:28.766 [INFO][3923] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0", GenerateName:"whisker-6bf994bc7f-", Namespace:"calico-system", SelfLink:"", UID:"52940e35-8fee-4532-9c73-0644eb969513", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bf994bc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3", Pod:"whisker-6bf994bc7f-8g8k6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicad9877ef00", MAC:"62:cb:fb:22:c4:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 24 00:57:28.782063 containerd[1500]: 2026-01-24 00:57:28.776 [INFO][3923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3" Namespace="calico-system" Pod="whisker-6bf994bc7f-8g8k6" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--6bf994bc7f--8g8k6-eth0"
Jan 24 00:57:28.791401 kernel: bpftool[3980]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 24 00:57:28.821619 containerd[1500]: time="2026-01-24T00:57:28.821537525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:57:28.821801 containerd[1500]: time="2026-01-24T00:57:28.821622066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:57:28.822818 containerd[1500]: time="2026-01-24T00:57:28.822766193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:57:28.823038 containerd[1500]: time="2026-01-24T00:57:28.822961564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:57:28.861893 systemd[1]: Started cri-containerd-7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3.scope - libcontainer container 7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3.
Jan 24 00:57:28.903275 containerd[1500]: time="2026-01-24T00:57:28.902967673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf994bc7f-8g8k6,Uid:52940e35-8fee-4532-9c73-0644eb969513,Namespace:calico-system,Attempt:0,} returns sandbox id \"7cc27fa7c5fbcb225fd1ec12f8214e605970ff4f6a99dc8c6ba21a97346dbba3\""
Jan 24 00:57:28.906595 containerd[1500]: time="2026-01-24T00:57:28.906435824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:57:29.036490 systemd-networkd[1402]: vxlan.calico: Link UP
Jan 24 00:57:29.036497 systemd-networkd[1402]: vxlan.calico: Gained carrier
Jan 24 00:57:29.338883 containerd[1500]: time="2026-01-24T00:57:29.338796929Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:57:29.340347 containerd[1500]: time="2026-01-24T00:57:29.340273997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:57:29.340347 containerd[1500]: time="2026-01-24T00:57:29.340339647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:57:29.340837 kubelet[2546]: E0124 00:57:29.340712 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:57:29.340837 kubelet[2546]: E0124 00:57:29.340798 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:57:29.343682 kubelet[2546]: E0124 00:57:29.343624 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116ca17b1744963b9e4b3aac3adf522,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:29.346396 containerd[1500]: time="2026-01-24T00:57:29.345930159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:57:29.784991 containerd[1500]: time="2026-01-24T00:57:29.784919035Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:57:29.786336 containerd[1500]: time="2026-01-24T00:57:29.786295562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:57:29.786467 containerd[1500]: time="2026-01-24T00:57:29.786391853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:57:29.786663 kubelet[2546]: E0124 00:57:29.786596 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:57:29.787222 kubelet[2546]: E0124 00:57:29.786663 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:57:29.787341 kubelet[2546]: E0124 00:57:29.786864 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:29.788561 kubelet[2546]: E0124 00:57:29.788440 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513"
Jan 24 00:57:29.973279 containerd[1500]: time="2026-01-24T00:57:29.973216983Z" level=info msg="StopPodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\""
Jan 24 00:57:29.976459 kubelet[2546]: I0124 00:57:29.976113 2546 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2623c9d2-b9f3-4861-944f-8da3fba4e042" path="/var/lib/kubelet/pods/2623c9d2-b9f3-4861-944f-8da3fba4e042/volumes"
Jan 24 00:57:30.026044 systemd-networkd[1402]: calicad9877ef00: Gained IPv6LL
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.065 [INFO][4101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.066 [INFO][4101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" iface="eth0" netns="/var/run/netns/cni-e3a40910-b5a7-816e-67f9-c1cbd8756c5a"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.067 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" iface="eth0" netns="/var/run/netns/cni-e3a40910-b5a7-816e-67f9-c1cbd8756c5a"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.068 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" iface="eth0" netns="/var/run/netns/cni-e3a40910-b5a7-816e-67f9-c1cbd8756c5a"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.068 [INFO][4101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.068 [INFO][4101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.107 [INFO][4109] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.108 [INFO][4109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.108 [INFO][4109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.117 [WARNING][4109] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.117 [INFO][4109] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.119 [INFO][4109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 24 00:57:30.127610 containerd[1500]: 2026-01-24 00:57:30.123 [INFO][4101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f"
Jan 24 00:57:30.131324 containerd[1500]: time="2026-01-24T00:57:30.127964938Z" level=info msg="TearDown network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" successfully"
Jan 24 00:57:30.131324 containerd[1500]: time="2026-01-24T00:57:30.128017608Z" level=info msg="StopPodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" returns successfully"
Jan 24 00:57:30.131324 containerd[1500]: time="2026-01-24T00:57:30.130995214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-pr2mw,Uid:e25c9c50-eb09-419b-a216-dabe2aa24f5e,Namespace:calico-apiserver,Attempt:1,}"
Jan 24 00:57:30.136810 systemd[1]: run-netns-cni\x2de3a40910\x2db5a7\x2d816e\x2d67f9\x2dc1cbd8756c5a.mount: Deactivated successfully.
Jan 24 00:57:30.238507 kubelet[2546]: E0124 00:57:30.236876 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513"
Jan 24 00:57:30.348164 systemd-networkd[1402]: cali5e010050450: Link UP
Jan 24 00:57:30.353172 systemd-networkd[1402]: cali5e010050450: Gained carrier
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.219 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0 calico-apiserver-6ff89d9558- calico-apiserver e25c9c50-eb09-419b-a216-dabe2aa24f5e 931 0 2026-01-24 00:57:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ff89d9558 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b calico-apiserver-6ff89d9558-pr2mw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e010050450 [] [] }} ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.220 [INFO][4116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.281 [INFO][4128] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" HandleID="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.281 [INFO][4128] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" HandleID="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003075f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"calico-apiserver-6ff89d9558-pr2mw", "timestamp":"2026-01-24 00:57:30.281169915 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.281 [INFO][4128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.281 [INFO][4128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.281 [INFO][4128] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b'
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.291 [INFO][4128] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.300 [INFO][4128] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.306 [INFO][4128] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.309 [INFO][4128] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.312 [INFO][4128] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.312 [INFO][4128] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.315 [INFO][4128] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.320 [INFO][4128] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.329 [INFO][4128] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.130/26] block=192.168.24.128/26 handle="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.329 [INFO][4128] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.130/26] handle="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" host="ci-4081-3-6-n-32cc93a80b"
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.330 [INFO][4128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 24 00:57:30.391859 containerd[1500]: 2026-01-24 00:57:30.330 [INFO][4128] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.130/26] IPv6=[] ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" HandleID="k8s-pod-network.edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.392285 containerd[1500]: 2026-01-24 00:57:30.340 [INFO][4116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"e25c9c50-eb09-419b-a216-dabe2aa24f5e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"calico-apiserver-6ff89d9558-pr2mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e010050450", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 24 00:57:30.392285 containerd[1500]: 2026-01-24 00:57:30.340 [INFO][4116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.130/32] ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.392285 containerd[1500]: 2026-01-24 00:57:30.340 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e010050450 ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.392285 containerd[1500]: 2026-01-24 00:57:30.352 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.392285 containerd[1500]: 2026-01-24 00:57:30.363 [INFO][4116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"e25c9c50-eb09-419b-a216-dabe2aa24f5e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3", Pod:"calico-apiserver-6ff89d9558-pr2mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e010050450", MAC:"62:66:df:c8:62:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 24 00:57:30.392285 containerd[1500]: 2026-01-24 00:57:30.387 [INFO][4116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-pr2mw" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0"
Jan 24 00:57:30.417688 containerd[1500]: time="2026-01-24T00:57:30.417039780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:57:30.417688 containerd[1500]: time="2026-01-24T00:57:30.417093671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:57:30.417688 containerd[1500]: time="2026-01-24T00:57:30.417103871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:57:30.417688 containerd[1500]: time="2026-01-24T00:57:30.417164211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:57:30.450954 systemd[1]: Started cri-containerd-edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3.scope - libcontainer container edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3.
Jan 24 00:57:30.485316 containerd[1500]: time="2026-01-24T00:57:30.485269230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-pr2mw,Uid:e25c9c50-eb09-419b-a216-dabe2aa24f5e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3\""
Jan 24 00:57:30.487063 containerd[1500]: time="2026-01-24T00:57:30.486804668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:57:30.666556 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL
Jan 24 00:57:30.926674 containerd[1500]: time="2026-01-24T00:57:30.926452724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:57:30.930826 containerd[1500]: time="2026-01-24T00:57:30.930631946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:57:30.931257 containerd[1500]: time="2026-01-24T00:57:30.930704736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:57:30.931357 kubelet[2546]: E0124 00:57:30.931259 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:57:30.931357 kubelet[2546]: E0124 00:57:30.931297 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:57:30.932049 kubelet[2546]: E0124 00:57:30.931434 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbb4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-pr2mw_calico-apiserver(e25c9c50-eb09-419b-a216-dabe2aa24f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:30.933599 kubelet[2546]: E0124 00:57:30.933556 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e"
Jan 24 00:57:31.133486 systemd[1]: run-containerd-runc-k8s.io-edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3-runc.VXpvEe.mount: Deactivated successfully.
Jan 24 00:57:31.241469 kubelet[2546]: E0124 00:57:31.239874 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:57:31.817990 systemd-networkd[1402]: cali5e010050450: Gained IPv6LL Jan 24 00:57:31.972980 containerd[1500]: time="2026-01-24T00:57:31.972865136Z" level=info msg="StopPodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\"" Jan 24 00:57:31.976509 containerd[1500]: time="2026-01-24T00:57:31.975045767Z" level=info msg="StopPodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\"" Jan 24 00:57:31.976509 containerd[1500]: time="2026-01-24T00:57:31.975593479Z" level=info msg="StopPodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\"" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.126 [INFO][4221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.126 [INFO][4221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" iface="eth0" netns="/var/run/netns/cni-4c622970-e041-f248-3277-a8f42be16961" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.128 [INFO][4221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" iface="eth0" netns="/var/run/netns/cni-4c622970-e041-f248-3277-a8f42be16961" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.128 [INFO][4221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" iface="eth0" netns="/var/run/netns/cni-4c622970-e041-f248-3277-a8f42be16961" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.128 [INFO][4221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.128 [INFO][4221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.163 [INFO][4241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.163 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.163 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.168 [WARNING][4241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.168 [INFO][4241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.169 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.173203 containerd[1500]: 2026-01-24 00:57:32.171 [INFO][4221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:32.174468 containerd[1500]: time="2026-01-24T00:57:32.174166315Z" level=info msg="TearDown network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" successfully" Jan 24 00:57:32.174468 containerd[1500]: time="2026-01-24T00:57:32.174188466Z" level=info msg="StopPodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" returns successfully" Jan 24 00:57:32.178162 containerd[1500]: time="2026-01-24T00:57:32.176369086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9lcpv,Uid:267130dd-42b7-45fa-9166-0420d7cd47cc,Namespace:calico-system,Attempt:1,}" Jan 24 00:57:32.177842 systemd[1]: run-netns-cni\x2d4c622970\x2de041\x2df248\x2d3277\x2da8f42be16961.mount: Deactivated successfully. Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.111 [INFO][4208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.111 [INFO][4208] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" iface="eth0" netns="/var/run/netns/cni-047016bc-53da-9c63-6a23-258c0c8b0abc" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.112 [INFO][4208] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" iface="eth0" netns="/var/run/netns/cni-047016bc-53da-9c63-6a23-258c0c8b0abc" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.112 [INFO][4208] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" iface="eth0" netns="/var/run/netns/cni-047016bc-53da-9c63-6a23-258c0c8b0abc" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.112 [INFO][4208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.112 [INFO][4208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.167 [INFO][4235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.167 [INFO][4235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.171 [INFO][4235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.180 [WARNING][4235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.180 [INFO][4235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.181 [INFO][4235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.188119 containerd[1500]: 2026-01-24 00:57:32.184 [INFO][4208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:32.189920 containerd[1500]: time="2026-01-24T00:57:32.189779028Z" level=info msg="TearDown network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" successfully" Jan 24 00:57:32.189920 containerd[1500]: time="2026-01-24T00:57:32.189810888Z" level=info msg="StopPodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" returns successfully" Jan 24 00:57:32.191515 containerd[1500]: time="2026-01-24T00:57:32.191452565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59667657-b8mx9,Uid:3be98e24-0896-49a9-8666-4ca8f66cf2c8,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:57:32.192604 systemd[1]: run-netns-cni\x2d047016bc\x2d53da\x2d9c63\x2d6a23\x2d258c0c8b0abc.mount: Deactivated successfully. Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.134 [INFO][4217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.135 [INFO][4217] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" iface="eth0" netns="/var/run/netns/cni-9fdb7592-d68f-04f8-2b15-323338356e60" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.136 [INFO][4217] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" iface="eth0" netns="/var/run/netns/cni-9fdb7592-d68f-04f8-2b15-323338356e60" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.136 [INFO][4217] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" iface="eth0" netns="/var/run/netns/cni-9fdb7592-d68f-04f8-2b15-323338356e60" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.136 [INFO][4217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.136 [INFO][4217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.186 [INFO][4243] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.186 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.186 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.199 [WARNING][4243] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.199 [INFO][4243] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.203 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.211556 containerd[1500]: 2026-01-24 00:57:32.206 [INFO][4217] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:32.213687 containerd[1500]: time="2026-01-24T00:57:32.211844450Z" level=info msg="TearDown network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" successfully" Jan 24 00:57:32.213687 containerd[1500]: time="2026-01-24T00:57:32.211864800Z" level=info msg="StopPodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" returns successfully" Jan 24 00:57:32.213687 containerd[1500]: time="2026-01-24T00:57:32.213157476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7fv4k,Uid:1d12baac-e259-43f8-8c34-2fc70e4e9750,Namespace:kube-system,Attempt:1,}" Jan 24 00:57:32.215510 systemd[1]: run-netns-cni\x2d9fdb7592\x2dd68f\x2d04f8\x2d2b15\x2d323338356e60.mount: Deactivated successfully. Jan 24 00:57:32.241552 kubelet[2546]: E0124 00:57:32.241022 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:57:32.333072 systemd-networkd[1402]: cali1ba17ad0cbe: Link UP Jan 24 00:57:32.333886 systemd-networkd[1402]: cali1ba17ad0cbe: Gained carrier Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.266 [INFO][4276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0 coredns-668d6bf9bc- kube-system 1d12baac-e259-43f8-8c34-2fc70e4e9750 958 0 2026-01-24 00:56:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b coredns-668d6bf9bc-7fv4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ba17ad0cbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.266 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.297 [INFO][4296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" HandleID="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.297 [INFO][4296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" 
HandleID="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"coredns-668d6bf9bc-7fv4k", "timestamp":"2026-01-24 00:57:32.297231895 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.297 [INFO][4296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.297 [INFO][4296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.297 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.302 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.307 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.311 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.312 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.314 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.314 [INFO][4296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.315 [INFO][4296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4 Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.318 [INFO][4296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.323 [INFO][4296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.131/26] block=192.168.24.128/26 handle="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.323 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.131/26] handle="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.323 [INFO][4296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:32.346487 containerd[1500]: 2026-01-24 00:57:32.323 [INFO][4296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.131/26] IPv6=[] ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" HandleID="k8s-pod-network.4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.347529 containerd[1500]: 2026-01-24 00:57:32.326 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d12baac-e259-43f8-8c34-2fc70e4e9750", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"coredns-668d6bf9bc-7fv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ba17ad0cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.347529 containerd[1500]: 2026-01-24 00:57:32.327 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.131/32] ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.347529 containerd[1500]: 2026-01-24 00:57:32.327 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ba17ad0cbe ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.347529 containerd[1500]: 2026-01-24 00:57:32.334 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.347529 containerd[1500]: 2026-01-24 00:57:32.334 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d12baac-e259-43f8-8c34-2fc70e4e9750", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4", Pod:"coredns-668d6bf9bc-7fv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ba17ad0cbe", MAC:"7e:b5:5a:46:26:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.347529 containerd[1500]: 2026-01-24 00:57:32.343 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4" Namespace="kube-system" Pod="coredns-668d6bf9bc-7fv4k" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:32.362578 containerd[1500]: time="2026-01-24T00:57:32.362463497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:32.362817 containerd[1500]: time="2026-01-24T00:57:32.362539908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:32.362817 containerd[1500]: time="2026-01-24T00:57:32.362569188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:32.362817 containerd[1500]: time="2026-01-24T00:57:32.362636058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:32.383863 systemd[1]: Started cri-containerd-4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4.scope - libcontainer container 4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4. Jan 24 00:57:32.425227 containerd[1500]: time="2026-01-24T00:57:32.425121017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7fv4k,Uid:1d12baac-e259-43f8-8c34-2fc70e4e9750,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4\"" Jan 24 00:57:32.429661 containerd[1500]: time="2026-01-24T00:57:32.429427697Z" level=info msg="CreateContainer within sandbox \"4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:32.442876 systemd-networkd[1402]: calic8a3aa58e2d: Link UP Jan 24 00:57:32.443055 systemd-networkd[1402]: calic8a3aa58e2d: Gained carrier Jan 24 00:57:32.449533 containerd[1500]: time="2026-01-24T00:57:32.448899447Z" level=info msg="CreateContainer within sandbox \"4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"766dc24595da5b2fced403671b03f79251dede1edcb4d7658484a73ad8d36006\"" Jan 24 00:57:32.451216 containerd[1500]: time="2026-01-24T00:57:32.451198328Z" level=info msg="StartContainer for \"766dc24595da5b2fced403671b03f79251dede1edcb4d7658484a73ad8d36006\"" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.262 [INFO][4257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0 goldmane-666569f655- calico-system 267130dd-42b7-45fa-9166-0420d7cd47cc 957 0 2026-01-24 00:57:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b goldmane-666569f655-9lcpv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic8a3aa58e2d [] [] }} ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.263 [INFO][4257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.303 [INFO][4293] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" HandleID="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.304 [INFO][4293] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" HandleID="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" 
Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"goldmane-666569f655-9lcpv", "timestamp":"2026-01-24 00:57:32.303335613 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.304 [INFO][4293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.323 [INFO][4293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.323 [INFO][4293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.406 [INFO][4293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.410 [INFO][4293] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.414 [INFO][4293] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.416 [INFO][4293] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.418 [INFO][4293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.418 [INFO][4293] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.420 [INFO][4293] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89 Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.425 [INFO][4293] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.432 [INFO][4293] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.132/26] block=192.168.24.128/26 handle="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.433 [INFO][4293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.132/26] handle="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.433 [INFO][4293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:32.461788 containerd[1500]: 2026-01-24 00:57:32.433 [INFO][4293] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.132/26] IPv6=[] ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" HandleID="k8s-pod-network.0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.462229 containerd[1500]: 2026-01-24 00:57:32.438 [INFO][4257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"267130dd-42b7-45fa-9166-0420d7cd47cc", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"goldmane-666569f655-9lcpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8a3aa58e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.462229 containerd[1500]: 2026-01-24 00:57:32.438 [INFO][4257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.132/32] ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.462229 containerd[1500]: 2026-01-24 00:57:32.438 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8a3aa58e2d ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.462229 containerd[1500]: 2026-01-24 00:57:32.443 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.462229 containerd[1500]: 2026-01-24 00:57:32.444 [INFO][4257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" 
Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"267130dd-42b7-45fa-9166-0420d7cd47cc", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89", Pod:"goldmane-666569f655-9lcpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8a3aa58e2d", MAC:"4e:af:0b:91:6d:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.462229 containerd[1500]: 2026-01-24 00:57:32.453 [INFO][4257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89" Namespace="calico-system" Pod="goldmane-666569f655-9lcpv" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:32.480978 systemd[1]: Started cri-containerd-766dc24595da5b2fced403671b03f79251dede1edcb4d7658484a73ad8d36006.scope - libcontainer container 766dc24595da5b2fced403671b03f79251dede1edcb4d7658484a73ad8d36006. Jan 24 00:57:32.491712 containerd[1500]: time="2026-01-24T00:57:32.491502275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:32.491956 containerd[1500]: time="2026-01-24T00:57:32.491858856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:32.492102 containerd[1500]: time="2026-01-24T00:57:32.491895676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:32.492755 containerd[1500]: time="2026-01-24T00:57:32.492591380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:32.514951 systemd[1]: Started cri-containerd-0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89.scope - libcontainer container 0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89. 
Jan 24 00:57:32.518749 containerd[1500]: time="2026-01-24T00:57:32.518707271Z" level=info msg="StartContainer for \"766dc24595da5b2fced403671b03f79251dede1edcb4d7658484a73ad8d36006\" returns successfully" Jan 24 00:57:32.546755 systemd-networkd[1402]: cali23d5b56d9b1: Link UP Jan 24 00:57:32.546957 systemd-networkd[1402]: cali23d5b56d9b1: Gained carrier Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.268 [INFO][4266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0 calico-apiserver-59667657- calico-apiserver 3be98e24-0896-49a9-8666-4ca8f66cf2c8 956 0 2026-01-24 00:57:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59667657 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b calico-apiserver-59667657-b8mx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23d5b56d9b1 [] [] }} ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.269 [INFO][4266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.310 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" HandleID="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.310 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" HandleID="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"calico-apiserver-59667657-b8mx9", "timestamp":"2026-01-24 00:57:32.310407026 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.310 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.433 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.433 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.505 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.511 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.516 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.519 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.522 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.522 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.524 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589 Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.529 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.536 [INFO][4303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.133/26] block=192.168.24.128/26 handle="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.536 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.133/26] handle="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.537 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:32.563715 containerd[1500]: 2026-01-24 00:57:32.537 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.133/26] IPv6=[] ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" HandleID="k8s-pod-network.9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.564319 containerd[1500]: 2026-01-24 00:57:32.542 [INFO][4266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0", GenerateName:"calico-apiserver-59667657-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be98e24-0896-49a9-8666-4ca8f66cf2c8", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59667657", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"calico-apiserver-59667657-b8mx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23d5b56d9b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.564319 containerd[1500]: 2026-01-24 00:57:32.542 [INFO][4266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.133/32] ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.564319 containerd[1500]: 2026-01-24 00:57:32.542 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23d5b56d9b1 ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.564319 containerd[1500]: 2026-01-24 00:57:32.547 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.564319 containerd[1500]: 2026-01-24 00:57:32.547 [INFO][4266] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0", GenerateName:"calico-apiserver-59667657-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be98e24-0896-49a9-8666-4ca8f66cf2c8", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59667657", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589", Pod:"calico-apiserver-59667657-b8mx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23d5b56d9b1", MAC:"1a:f3:99:ce:b1:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.564319 containerd[1500]: 2026-01-24 00:57:32.554 [INFO][4266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589" Namespace="calico-apiserver" Pod="calico-apiserver-59667657-b8mx9" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:32.587578 containerd[1500]: time="2026-01-24T00:57:32.587542049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9lcpv,Uid:267130dd-42b7-45fa-9166-0420d7cd47cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89\"" Jan 24 00:57:32.592336 containerd[1500]: time="2026-01-24T00:57:32.592244091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:32.594903 containerd[1500]: time="2026-01-24T00:57:32.594669432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:32.594903 containerd[1500]: time="2026-01-24T00:57:32.594717372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:32.594903 containerd[1500]: time="2026-01-24T00:57:32.594727162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:32.594903 containerd[1500]: time="2026-01-24T00:57:32.594865093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:32.611884 systemd[1]: Started cri-containerd-9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589.scope - libcontainer container 9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589. Jan 24 00:57:32.658760 containerd[1500]: time="2026-01-24T00:57:32.658208336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59667657-b8mx9,Uid:3be98e24-0896-49a9-8666-4ca8f66cf2c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589\"" Jan 24 00:57:32.970654 containerd[1500]: time="2026-01-24T00:57:32.969610158Z" level=info msg="StopPodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\"" Jan 24 00:57:33.031165 containerd[1500]: time="2026-01-24T00:57:33.030938513Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:33.033610 containerd[1500]: time="2026-01-24T00:57:33.033430424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:33.033610 containerd[1500]: time="2026-01-24T00:57:33.033549905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:33.034601 kubelet[2546]: E0124 00:57:33.034010 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:33.034601 kubelet[2546]: E0124 00:57:33.034095 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:33.034601 kubelet[2546]: E0124 00:57:33.034466 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhfqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9lcpv_calico-system(267130dd-42b7-45fa-9166-0420d7cd47cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:33.036278 containerd[1500]: time="2026-01-24T00:57:33.036155936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:33.036532 kubelet[2546]: E0124 00:57:33.036471 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.071 [INFO][4511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.072 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" iface="eth0" netns="/var/run/netns/cni-8d54fda8-ea83-2cb0-11c3-279aa1691545" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.074 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" iface="eth0" netns="/var/run/netns/cni-8d54fda8-ea83-2cb0-11c3-279aa1691545" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.075 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" iface="eth0" netns="/var/run/netns/cni-8d54fda8-ea83-2cb0-11c3-279aa1691545" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.075 [INFO][4511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.075 [INFO][4511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.117 [INFO][4518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.117 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.117 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.126 [WARNING][4518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.126 [INFO][4518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.129 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.137337 containerd[1500]: 2026-01-24 00:57:33.132 [INFO][4511] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:33.138315 containerd[1500]: time="2026-01-24T00:57:33.137685177Z" level=info msg="TearDown network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" successfully" Jan 24 00:57:33.138315 containerd[1500]: time="2026-01-24T00:57:33.137723057Z" level=info msg="StopPodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" returns successfully" Jan 24 00:57:33.139101 containerd[1500]: time="2026-01-24T00:57:33.139031812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccf5-5whtp,Uid:92edd234-ce88-420a-bb1b-56d2f203263f,Namespace:calico-system,Attempt:1,}" Jan 24 00:57:33.200415 systemd[1]: run-netns-cni\x2d8d54fda8\x2dea83\x2d2cb0\x2d11c3\x2d279aa1691545.mount: Deactivated successfully. Jan 24 00:57:33.254263 kubelet[2546]: E0124 00:57:33.254075 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:57:33.354219 systemd-networkd[1402]: calia9f236ee9a3: Link UP Jan 24 00:57:33.357542 systemd-networkd[1402]: calia9f236ee9a3: Gained carrier Jan 24 00:57:33.370179 kubelet[2546]: I0124 00:57:33.370131 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7fv4k" podStartSLOduration=40.369724794 podStartE2EDuration="40.369724794s" podCreationTimestamp="2026-01-24 00:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:33.301084476 +0000 UTC m=+45.451025969" watchObservedRunningTime="2026-01-24 00:57:33.369724794 +0000 UTC m=+45.519666287" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.233 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0 calico-kube-controllers-85cdccf5- calico-system 92edd234-ce88-420a-bb1b-56d2f203263f 981 0 2026-01-24 00:57:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85cdccf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b calico-kube-controllers-85cdccf5-5whtp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9f236ee9a3 [] [] }} ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.234 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" 
Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.279 [INFO][4537] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" HandleID="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.280 [INFO][4537] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" HandleID="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"calico-kube-controllers-85cdccf5-5whtp", "timestamp":"2026-01-24 00:57:33.279482782 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.280 [INFO][4537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.281 [INFO][4537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.281 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.298 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.308 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.316 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.322 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.326 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.326 [INFO][4537] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.330 [INFO][4537] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56 Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.335 [INFO][4537] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 
handle="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.347 [INFO][4537] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.134/26] block=192.168.24.128/26 handle="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.347 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.134/26] handle="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.347 [INFO][4537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.373789 containerd[1500]: 2026-01-24 00:57:33.347 [INFO][4537] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.134/26] IPv6=[] ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" HandleID="k8s-pod-network.991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.374227 containerd[1500]: 2026-01-24 00:57:33.350 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0", GenerateName:"calico-kube-controllers-85cdccf5-", Namespace:"calico-system", SelfLink:"", UID:"92edd234-ce88-420a-bb1b-56d2f203263f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cdccf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"calico-kube-controllers-85cdccf5-5whtp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9f236ee9a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.374227 containerd[1500]: 2026-01-24 00:57:33.350 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.134/32] ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" 
WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.374227 containerd[1500]: 2026-01-24 00:57:33.350 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9f236ee9a3 ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.374227 containerd[1500]: 2026-01-24 00:57:33.358 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.374227 containerd[1500]: 2026-01-24 00:57:33.358 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0", GenerateName:"calico-kube-controllers-85cdccf5-", Namespace:"calico-system", SelfLink:"", UID:"92edd234-ce88-420a-bb1b-56d2f203263f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cdccf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56", Pod:"calico-kube-controllers-85cdccf5-5whtp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9f236ee9a3", MAC:"6e:97:b5:2f:70:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.374227 containerd[1500]: 2026-01-24 00:57:33.369 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56" Namespace="calico-system" Pod="calico-kube-controllers-85cdccf5-5whtp" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:33.396204 containerd[1500]: time="2026-01-24T00:57:33.395663676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:33.396204 containerd[1500]: time="2026-01-24T00:57:33.395707386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:33.396204 containerd[1500]: time="2026-01-24T00:57:33.395718667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:33.396204 containerd[1500]: time="2026-01-24T00:57:33.395800877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:33.419140 systemd[1]: Started cri-containerd-991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56.scope - libcontainer container 991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56. Jan 24 00:57:33.462020 containerd[1500]: time="2026-01-24T00:57:33.461934324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccf5-5whtp,Uid:92edd234-ce88-420a-bb1b-56d2f203263f,Namespace:calico-system,Attempt:1,} returns sandbox id \"991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56\"" Jan 24 00:57:33.473960 containerd[1500]: time="2026-01-24T00:57:33.473917736Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:33.475081 containerd[1500]: time="2026-01-24T00:57:33.475057721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:33.475169 containerd[1500]: time="2026-01-24T00:57:33.475128071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:33.475258 kubelet[2546]: E0124 00:57:33.475231 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:33.475298 kubelet[2546]: E0124 00:57:33.475264 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:33.475452 kubelet[2546]: E0124 00:57:33.475407 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvjcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59667657-b8mx9_calico-apiserver(3be98e24-0896-49a9-8666-4ca8f66cf2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:33.475860 containerd[1500]: time="2026-01-24T00:57:33.475838514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:33.476524 kubelet[2546]: E0124 00:57:33.476499 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:57:33.546139 systemd-networkd[1402]: cali1ba17ad0cbe: Gained IPv6LL Jan 24 00:57:33.868686 systemd-networkd[1402]: cali23d5b56d9b1: Gained IPv6LL Jan 24 00:57:33.915058 containerd[1500]: time="2026-01-24T00:57:33.914930860Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:33.916847 containerd[1500]: time="2026-01-24T00:57:33.916679257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:33.916988 containerd[1500]: time="2026-01-24T00:57:33.916880118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:33.918556 kubelet[2546]: E0124 00:57:33.917337 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:33.918556 kubelet[2546]: E0124 00:57:33.917473 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:33.919054 kubelet[2546]: E0124 00:57:33.918862 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjqwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccf5-5whtp_calico-system(92edd234-ce88-420a-bb1b-56d2f203263f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:33.921526 kubelet[2546]: E0124 00:57:33.921454 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:57:33.971231 containerd[1500]: time="2026-01-24T00:57:33.971137584Z" level=info msg="StopPodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\"" Jan 24 00:57:33.972569 containerd[1500]: time="2026-01-24T00:57:33.972019018Z" level=info msg="StopPodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\"" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.092 [INFO][4617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.092 [INFO][4617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" iface="eth0" netns="/var/run/netns/cni-07d1c0cd-09c4-464a-5dbc-91e075cf3cd9" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.096 [INFO][4617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" iface="eth0" netns="/var/run/netns/cni-07d1c0cd-09c4-464a-5dbc-91e075cf3cd9" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.096 [INFO][4617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" iface="eth0" netns="/var/run/netns/cni-07d1c0cd-09c4-464a-5dbc-91e075cf3cd9" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.097 [INFO][4617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.097 [INFO][4617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.128 [INFO][4632] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.129 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.129 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.141 [WARNING][4632] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.141 [INFO][4632] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.145 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:34.153239 containerd[1500]: 2026-01-24 00:57:34.150 [INFO][4617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:34.156748 containerd[1500]: time="2026-01-24T00:57:34.155259331Z" level=info msg="TearDown network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" successfully" Jan 24 00:57:34.156871 containerd[1500]: time="2026-01-24T00:57:34.156815257Z" level=info msg="StopPodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" returns successfully" Jan 24 00:57:34.157848 systemd[1]: run-netns-cni\x2d07d1c0cd\x2d09c4\x2d464a\x2d5dbc\x2d91e075cf3cd9.mount: Deactivated successfully. Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.080 [INFO][4611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.082 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" iface="eth0" netns="/var/run/netns/cni-d04d99f7-b4d3-da78-b9d8-586f996236eb" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.082 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" iface="eth0" netns="/var/run/netns/cni-d04d99f7-b4d3-da78-b9d8-586f996236eb" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.082 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" iface="eth0" netns="/var/run/netns/cni-d04d99f7-b4d3-da78-b9d8-586f996236eb" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.082 [INFO][4611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.083 [INFO][4611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.146 [INFO][4627] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.146 [INFO][4627] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.146 [INFO][4627] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.160 [WARNING][4627] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.160 [INFO][4627] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.162 [INFO][4627] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:34.169663 containerd[1500]: 2026-01-24 00:57:34.165 [INFO][4611] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:34.170185 containerd[1500]: time="2026-01-24T00:57:34.169770460Z" level=info msg="TearDown network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" successfully" Jan 24 00:57:34.170185 containerd[1500]: time="2026-01-24T00:57:34.169791800Z" level=info msg="StopPodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" returns successfully" Jan 24 00:57:34.173722 containerd[1500]: time="2026-01-24T00:57:34.172234050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mnh95,Uid:60844a2d-0038-4132-8338-140b75e01a74,Namespace:kube-system,Attempt:1,}" Jan 24 00:57:34.173722 containerd[1500]: time="2026-01-24T00:57:34.172850133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-qsdz4,Uid:abee6eff-7ee6-4417-a4eb-5f0514e6e7e9,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:57:34.173274 systemd[1]: run-netns-cni\x2dd04d99f7\x2db4d3\x2dda78\x2db9d8\x2d586f996236eb.mount: Deactivated successfully. Jan 24 00:57:34.284899 kubelet[2546]: E0124 00:57:34.284512 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:57:34.284899 kubelet[2546]: E0124 00:57:34.284684 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:57:34.285986 kubelet[2546]: E0124 00:57:34.285694 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:57:34.331755 systemd-networkd[1402]: calide3e79c8cdd: Link UP Jan 24 00:57:34.332881 systemd-networkd[1402]: calide3e79c8cdd: Gained carrier Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.249 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0 coredns-668d6bf9bc- kube-system 60844a2d-0038-4132-8338-140b75e01a74 1004 0 2026-01-24 00:56:53 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b coredns-668d6bf9bc-mnh95 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide3e79c8cdd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.249 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4665] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" HandleID="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4665] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" HandleID="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"coredns-668d6bf9bc-mnh95", "timestamp":"2026-01-24 00:57:34.275231859 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
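The ipam_plugin.go entries above and below repeat the same sequence for every CNI ADD on this node: acquire a host-wide IPAM lock, look up the block this host has an affinity for (192.168.24.128/26 throughout this log), claim the next free address from it, and release the lock. Below is a minimal Go sketch of that pattern, assuming a single in-memory block; the type and function names are illustrative, not Calico's.

    // ipam_sketch.go: a minimal sketch (NOT Calico's implementation) of the
    // lock/affinity/claim sequence visible in the ipam_plugin.go entries.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type hostIPAM struct {
        mu    sync.Mutex          // the "host-wide IPAM lock"
        block netip.Prefix        // the affine block, e.g. 192.168.24.128/26
        used  map[netip.Addr]bool // addresses already handed out
    }

    // autoAssign mirrors the Auto-assign / Trying affinity / claim steps.
    func (h *hostIPAM) autoAssign() (netip.Addr, error) {
        h.mu.Lock()         // "About to acquire host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."

        for a := h.block.Addr(); h.block.Contains(a); a = a.Next() {
            if !h.used[a] {
                h.used[a] = true // "Successfully claimed IPs: [...]"
                return a, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", h.block)
    }

    func main() {
        h := &hostIPAM{
            block: netip.MustParsePrefix("192.168.24.128/26"),
            used:  map[netip.Addr]bool{},
        }
        a, _ := h.autoAssign()
        fmt.Println(a) // 192.168.24.128 on the first call
    }

The real plugin additionally persists the claim ("Writing block in order to claim IPs") and reserves special addresses in the block, which this sketch omits.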
Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.281 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.290 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.296 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.299 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.303 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.303 [INFO][4665] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.307 [INFO][4665] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280 Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.312 [INFO][4665] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.324 [INFO][4665] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.135/26] block=192.168.24.128/26 handle="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.324 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.135/26] handle="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.324 [INFO][4665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
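Each failed pull above (goldmane, apiserver, and kube-controllers, all at v3.30.4) surfaces twice in kubelet: first as ErrImagePull when the CRI PullImage call returns NotFound, then as ImagePullBackOff on subsequent pod syncs. Kubelet retries pulls with exponential backoff; the sketch below shows that retry shape, where the 10s initial delay and 300s cap reflect kubelet's documented defaults and should be treated as assumptions rather than quotes of its code.

    // backoff_sketch.go: a simplified sketch of the ErrImagePull ->
    // ImagePullBackOff retry shape seen in the kubelet entries above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func pullImage(ref string) error {
        // Stand-in for the CRI PullImage call that returned
        // "code = NotFound ... not found" for the Calico images.
        return errors.New("rpc error: code = NotFound")
    }

    func main() {
        const ref = "ghcr.io/flatcar/calico/goldmane:v3.30.4"
        delay, maxDelay := 10*time.Second, 300*time.Second

        for attempt := 1; attempt <= 5; attempt++ {
            if err := pullImage(ref); err != nil {
                fmt.Printf("attempt %d: ErrImagePull: %v; backing off %s\n",
                    attempt, err, delay)
                time.Sleep(delay) // kubelet reports ImagePullBackOff here
                if delay *= 2; delay > maxDelay {
                    delay = maxDelay
                }
                continue
            }
            return
        }
    }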
Jan 24 00:57:34.351694 containerd[1500]: 2026-01-24 00:57:34.324 [INFO][4665] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.135/26] IPv6=[] ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" HandleID="k8s-pod-network.8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.352354 containerd[1500]: 2026-01-24 00:57:34.328 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60844a2d-0038-4132-8338-140b75e01a74", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"coredns-668d6bf9bc-mnh95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide3e79c8cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.352354 containerd[1500]: 2026-01-24 00:57:34.329 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.135/32] ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.352354 containerd[1500]: 2026-01-24 00:57:34.329 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide3e79c8cdd ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.352354 containerd[1500]: 2026-01-24 00:57:34.333 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.352354 containerd[1500]: 2026-01-24 00:57:34.334 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60844a2d-0038-4132-8338-140b75e01a74", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280", Pod:"coredns-668d6bf9bc-mnh95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide3e79c8cdd", MAC:"12:0d:7d:ea:74:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.352354 containerd[1500]: 2026-01-24 00:57:34.349 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280" Namespace="kube-system" Pod="coredns-668d6bf9bc-mnh95" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:34.369834 containerd[1500]: time="2026-01-24T00:57:34.369590843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:34.369834 containerd[1500]: time="2026-01-24T00:57:34.369682003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:34.369834 containerd[1500]: time="2026-01-24T00:57:34.369707614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:34.370905 containerd[1500]: time="2026-01-24T00:57:34.370849588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:34.396869 systemd[1]: Started cri-containerd-8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280.scope - libcontainer container 8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280. Jan 24 00:57:34.430988 systemd-networkd[1402]: cali09b5cb2c02c: Link UP Jan 24 00:57:34.432965 systemd-networkd[1402]: cali09b5cb2c02c: Gained carrier Jan 24 00:57:34.443160 systemd-networkd[1402]: calic8a3aa58e2d: Gained IPv6LL Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.250 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0 calico-apiserver-6ff89d9558- calico-apiserver abee6eff-7ee6-4417-a4eb-5f0514e6e7e9 1003 0 2026-01-24 00:57:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ff89d9558 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b calico-apiserver-6ff89d9558-qsdz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali09b5cb2c02c [] [] }} ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.250 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4670] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" HandleID="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4670] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" HandleID="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"calico-apiserver-6ff89d9558-qsdz4", "timestamp":"2026-01-24 00:57:34.275317299 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.275 [INFO][4670] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.324 [INFO][4670] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
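The "trying next host - response was http.StatusNotFound" host=ghcr.io entries mean the registry itself reported the manifests missing, so the same 404 can be reproduced off the node via the OCI distribution API. A sketch follows, assuming ghcr.io's anonymous pull-token flow; error handling is pared down to the essentials.

    // manifest_check.go: a sketch of reproducing the NotFound above by
    // asking the registry for the manifest directly.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        repo, tag := "flatcar/calico/goldmane", "v3.30.4"

        // 1. Anonymous pull token for the repository.
        tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            panic(err)
        }
        defer tr.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        json.NewDecoder(tr.Body).Decode(&tok)

        // 2. HEAD the manifest; a 404 here is what containerd logged.
        req, _ := http.NewRequest(http.MethodHead,
            fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        fmt.Println(resp.Status) // expect "404 Not Found" for a missing tag
    }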
Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.324 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.382 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.391 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.399 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.402 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.404 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.404 [INFO][4670] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.406 [INFO][4670] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067 Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.411 [INFO][4670] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.421 [INFO][4670] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.136/26] block=192.168.24.128/26 handle="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.421 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.136/26] handle="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.421 [INFO][4670] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
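[Editor's annotation — not part of the captured log.] The IPAM walk that just released the lock claimed 192.168.24.136 from the node-affine block 192.168.24.128/26. A short sketch of the block arithmetic using only the Go standard library; the values are taken from the entries above, and the /26 block size matches Calico's default for carving a pod CIDR into per-node chunks:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and claimed address as logged by ipam/ipam.go above.
	block := netip.MustParsePrefix("192.168.24.128/26")
	claimed := netip.MustParseAddr("192.168.24.136")

	// A /26 spans 2^(32-26) = 64 addresses.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
	fmt.Printf("claimed %s is inside the block: %v\n", claimed, block.Contains(claimed))
}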
Jan 24 00:57:34.452496 containerd[1500]: 2026-01-24 00:57:34.421 [INFO][4670] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.136/26] IPv6=[] ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" HandleID="k8s-pod-network.0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.454236 containerd[1500]: 2026-01-24 00:57:34.425 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"calico-apiserver-6ff89d9558-qsdz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09b5cb2c02c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.454236 containerd[1500]: 2026-01-24 00:57:34.425 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.136/32] ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.454236 containerd[1500]: 2026-01-24 00:57:34.425 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09b5cb2c02c ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.454236 containerd[1500]: 2026-01-24 00:57:34.431 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.454236 containerd[1500]: 2026-01-24 
00:57:34.431 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067", Pod:"calico-apiserver-6ff89d9558-qsdz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09b5cb2c02c", MAC:"f6:66:97:48:8a:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:34.454236 containerd[1500]: 2026-01-24 00:57:34.443 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067" Namespace="calico-apiserver" Pod="calico-apiserver-6ff89d9558-qsdz4" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:34.466060 containerd[1500]: time="2026-01-24T00:57:34.466036955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mnh95,Uid:60844a2d-0038-4132-8338-140b75e01a74,Namespace:kube-system,Attempt:1,} returns sandbox id \"8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280\"" Jan 24 00:57:34.481038 containerd[1500]: time="2026-01-24T00:57:34.480903946Z" level=info msg="CreateContainer within sandbox \"8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:34.489768 containerd[1500]: time="2026-01-24T00:57:34.489628531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:34.490285 containerd[1500]: time="2026-01-24T00:57:34.490241844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:34.490285 containerd[1500]: time="2026-01-24T00:57:34.490257194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:34.490407 containerd[1500]: time="2026-01-24T00:57:34.490361994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:34.493261 containerd[1500]: time="2026-01-24T00:57:34.493010995Z" level=info msg="CreateContainer within sandbox \"8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bad8902ef4ead5bf047f34f73aab6946da2fe8cc1a92c6861f575078d0456850\"" Jan 24 00:57:34.496535 containerd[1500]: time="2026-01-24T00:57:34.496513449Z" level=info msg="StartContainer for \"bad8902ef4ead5bf047f34f73aab6946da2fe8cc1a92c6861f575078d0456850\"" Jan 24 00:57:34.510873 systemd[1]: Started cri-containerd-0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067.scope - libcontainer container 0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067. Jan 24 00:57:34.526871 systemd[1]: Started cri-containerd-bad8902ef4ead5bf047f34f73aab6946da2fe8cc1a92c6861f575078d0456850.scope - libcontainer container bad8902ef4ead5bf047f34f73aab6946da2fe8cc1a92c6861f575078d0456850. Jan 24 00:57:34.559168 containerd[1500]: time="2026-01-24T00:57:34.559132764Z" level=info msg="StartContainer for \"bad8902ef4ead5bf047f34f73aab6946da2fe8cc1a92c6861f575078d0456850\" returns successfully" Jan 24 00:57:34.579049 containerd[1500]: time="2026-01-24T00:57:34.578989815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff89d9558-qsdz4,Uid:abee6eff-7ee6-4417-a4eb-5f0514e6e7e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067\"" Jan 24 00:57:34.580448 containerd[1500]: time="2026-01-24T00:57:34.580431371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:34.762005 systemd-networkd[1402]: calia9f236ee9a3: Gained IPv6LL Jan 24 00:57:34.971455 containerd[1500]: time="2026-01-24T00:57:34.971393962Z" level=info msg="StopPodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\"" Jan 24 00:57:35.012756 containerd[1500]: time="2026-01-24T00:57:35.011447262Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:35.014648 containerd[1500]: time="2026-01-24T00:57:35.014623884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:35.014779 containerd[1500]: time="2026-01-24T00:57:35.014700654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:35.015095 kubelet[2546]: E0124 00:57:35.015066 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:35.015269 kubelet[2546]: E0124 00:57:35.015194 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:35.018502 kubelet[2546]: E0124 00:57:35.018450 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvgfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-qsdz4_calico-apiserver(abee6eff-7ee6-4417-a4eb-5f0514e6e7e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:35.019918 kubelet[2546]: E0124 00:57:35.019856 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.037 [INFO][4834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 
00:57:35.038 [INFO][4834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" iface="eth0" netns="/var/run/netns/cni-4a8cb4bf-2d7c-be7e-98e9-4ab081d66689" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.044 [INFO][4834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" iface="eth0" netns="/var/run/netns/cni-4a8cb4bf-2d7c-be7e-98e9-4ab081d66689" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.044 [INFO][4834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" iface="eth0" netns="/var/run/netns/cni-4a8cb4bf-2d7c-be7e-98e9-4ab081d66689" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.044 [INFO][4834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.044 [INFO][4834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.077 [INFO][4841] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.077 [INFO][4841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.077 [INFO][4841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.088 [WARNING][4841] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.088 [INFO][4841] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.091 [INFO][4841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:35.098645 containerd[1500]: 2026-01-24 00:57:35.094 [INFO][4834] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:35.099470 containerd[1500]: time="2026-01-24T00:57:35.099020896Z" level=info msg="TearDown network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" successfully" Jan 24 00:57:35.099470 containerd[1500]: time="2026-01-24T00:57:35.099052636Z" level=info msg="StopPodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" returns successfully" Jan 24 00:57:35.099948 containerd[1500]: time="2026-01-24T00:57:35.099914119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ftl5s,Uid:43bd5f1f-4a0c-4b9f-b986-69bf7780bcee,Namespace:calico-system,Attempt:1,}" Jan 24 00:57:35.195079 systemd[1]: run-netns-cni\x2d4a8cb4bf\x2d2d7c\x2dbe7e\x2d98e9\x2d4ab081d66689.mount: Deactivated successfully. Jan 24 00:57:35.215007 systemd-networkd[1402]: cali90b2fb94ca3: Link UP Jan 24 00:57:35.216149 systemd-networkd[1402]: cali90b2fb94ca3: Gained carrier Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.155 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0 csi-node-driver- calico-system 43bd5f1f-4a0c-4b9f-b986-69bf7780bcee 1036 0 2026-01-24 00:57:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-32cc93a80b csi-node-driver-ftl5s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali90b2fb94ca3 [] [] }} ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.155 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.177 [INFO][4859] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" HandleID="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.177 [INFO][4859] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" HandleID="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32cc93a80b", "pod":"csi-node-driver-ftl5s", "timestamp":"2026-01-24 00:57:35.177302665 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32cc93a80b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.177 [INFO][4859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.177 [INFO][4859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.177 [INFO][4859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32cc93a80b' Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.182 [INFO][4859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.188 [INFO][4859] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.194 [INFO][4859] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.196 [INFO][4859] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.198 [INFO][4859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.198 [INFO][4859] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.199 [INFO][4859] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135 Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.203 [INFO][4859] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.208 [INFO][4859] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.137/26] block=192.168.24.128/26 handle="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.208 [INFO][4859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.137/26] handle="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" host="ci-4081-3-6-n-32cc93a80b" Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.209 [INFO][4859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
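[Editor's annotation — not part of the captured log.] Each allocation above brackets its work with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock", so concurrent CNI ADDs on this node are serialized and the addresses come out in sequence (.135, .136, .137). A toy allocator with the same shape — an illustration of the locking pattern, not Calico's implementation:

package main

import (
	"fmt"
	"sync"
)

type allocator struct {
	mu   sync.Mutex // plays the role of the host-wide IPAM lock
	next int        // next free host offset within the /26 block
}

func (a *allocator) assign() int {
	a.mu.Lock()
	defer a.mu.Unlock() // "released" once the block is written back
	n := a.next
	a.next++
	return n
}

func main() {
	a := &allocator{next: 136} // .135 was already handed out above
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two concurrent CNI ADDs, as in the log
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Printf("assigned 192.168.24.%d\n", a.assign())
		}()
	}
	wg.Wait()
}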
Jan 24 00:57:35.229949 containerd[1500]: 2026-01-24 00:57:35.209 [INFO][4859] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.137/26] IPv6=[] ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" HandleID="k8s-pod-network.c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.230547 containerd[1500]: 2026-01-24 00:57:35.211 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"", Pod:"csi-node-driver-ftl5s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90b2fb94ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:35.230547 containerd[1500]: 2026-01-24 00:57:35.211 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.137/32] ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.230547 containerd[1500]: 2026-01-24 00:57:35.211 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90b2fb94ca3 ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.230547 containerd[1500]: 2026-01-24 00:57:35.217 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.230547 containerd[1500]: 2026-01-24 00:57:35.217 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135", Pod:"csi-node-driver-ftl5s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90b2fb94ca3", MAC:"ee:fc:c3:22:13:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:35.230547 containerd[1500]: 2026-01-24 00:57:35.227 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135" Namespace="calico-system" Pod="csi-node-driver-ftl5s" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:35.252281 containerd[1500]: time="2026-01-24T00:57:35.252199630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:35.252429 containerd[1500]: time="2026-01-24T00:57:35.252297171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:35.252429 containerd[1500]: time="2026-01-24T00:57:35.252326811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:35.253428 containerd[1500]: time="2026-01-24T00:57:35.253392435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:35.274667 systemd[1]: run-containerd-runc-k8s.io-c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135-runc.La5K8T.mount: Deactivated successfully. Jan 24 00:57:35.283857 systemd[1]: Started cri-containerd-c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135.scope - libcontainer container c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135. 
Jan 24 00:57:35.291415 kubelet[2546]: E0124 00:57:35.291381 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:57:35.295093 kubelet[2546]: E0124 00:57:35.295002 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:57:35.344456 containerd[1500]: time="2026-01-24T00:57:35.344283591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ftl5s,Uid:43bd5f1f-4a0c-4b9f-b986-69bf7780bcee,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135\"" Jan 24 00:57:35.347002 containerd[1500]: time="2026-01-24T00:57:35.345814097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:35.351504 kubelet[2546]: I0124 00:57:35.351464 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mnh95" podStartSLOduration=42.351441849 podStartE2EDuration="42.351441849s" podCreationTimestamp="2026-01-24 00:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:35.346357649 +0000 UTC m=+47.496299142" watchObservedRunningTime="2026-01-24 00:57:35.351441849 +0000 UTC m=+47.501383332" Jan 24 00:57:35.784600 containerd[1500]: time="2026-01-24T00:57:35.784518811Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:35.786027 systemd-networkd[1402]: calide3e79c8cdd: Gained IPv6LL Jan 24 00:57:35.792516 kubelet[2546]: E0124 00:57:35.788078 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:35.792516 kubelet[2546]: E0124 00:57:35.788155 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:35.792663 containerd[1500]: time="2026-01-24T00:57:35.786726379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:35.792663 containerd[1500]: time="2026-01-24T00:57:35.786904290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:57:35.794003 kubelet[2546]: E0124 00:57:35.793091 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:35.797729 containerd[1500]: time="2026-01-24T00:57:35.797632191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:35.913977 systemd-networkd[1402]: cali09b5cb2c02c: Gained IPv6LL Jan 24 00:57:36.237455 containerd[1500]: time="2026-01-24T00:57:36.237394712Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:36.238940 containerd[1500]: time="2026-01-24T00:57:36.238879897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:36.238940 containerd[1500]: 
time="2026-01-24T00:57:36.238985038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:57:36.239212 kubelet[2546]: E0124 00:57:36.239151 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:36.239290 kubelet[2546]: E0124 00:57:36.239215 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:36.239423 kubelet[2546]: E0124 00:57:36.239355 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:36.240913 kubelet[2546]: E0124 00:57:36.240782 2546 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:36.309344 kubelet[2546]: E0124 00:57:36.309263 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:36.312283 kubelet[2546]: E0124 00:57:36.312245 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:57:37.194654 systemd-networkd[1402]: cali90b2fb94ca3: Gained IPv6LL Jan 24 00:57:37.311340 kubelet[2546]: E0124 00:57:37.311032 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:41.972395 containerd[1500]: time="2026-01-24T00:57:41.972308383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:42.423450 containerd[1500]: time="2026-01-24T00:57:42.423202582Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:42.425176 containerd[1500]: time="2026-01-24T00:57:42.424807616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:42.425176 containerd[1500]: time="2026-01-24T00:57:42.424958276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:57:42.425400 kubelet[2546]: E0124 00:57:42.425259 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:42.425400 kubelet[2546]: E0124 00:57:42.425322 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:42.426048 kubelet[2546]: E0124 00:57:42.425448 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116ca17b1744963b9e4b3aac3adf522,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:42.428524 containerd[1500]: time="2026-01-24T00:57:42.428220654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:42.870019 containerd[1500]: time="2026-01-24T00:57:42.869672045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:42.871822 containerd[1500]: time="2026-01-24T00:57:42.871614200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:42.871822 containerd[1500]: time="2026-01-24T00:57:42.871696870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:42.872025 kubelet[2546]: E0124 00:57:42.871941 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:42.872025 kubelet[2546]: E0124 00:57:42.872000 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:42.872214 kubelet[2546]: E0124 00:57:42.872130 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:42.873833 kubelet[2546]: E0124 00:57:42.873695 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:57:44.971170 containerd[1500]: time="2026-01-24T00:57:44.970962161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:45.401292 containerd[1500]: time="2026-01-24T00:57:45.401064125Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:45.403059 containerd[1500]: time="2026-01-24T00:57:45.402619198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:45.403059 containerd[1500]: time="2026-01-24T00:57:45.402705669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:45.403232 kubelet[2546]: E0124 00:57:45.402932 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:45.403232 kubelet[2546]: E0124 00:57:45.402989 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:45.403232 kubelet[2546]: E0124 00:57:45.403135 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbb4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-pr2mw_calico-apiserver(e25c9c50-eb09-419b-a216-dabe2aa24f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:45.404817 kubelet[2546]: E0124 00:57:45.404766 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:57:47.959190 containerd[1500]: time="2026-01-24T00:57:47.959142696Z" level=info msg="StopPodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\"" Jan 24 00:57:47.980021 containerd[1500]: time="2026-01-24T00:57:47.977809229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.035 [WARNING][4941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"267130dd-42b7-45fa-9166-0420d7cd47cc", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89", Pod:"goldmane-666569f655-9lcpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8a3aa58e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.036 [INFO][4941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.036 [INFO][4941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" iface="eth0" netns="" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.036 [INFO][4941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.036 [INFO][4941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.078 [INFO][4950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.079 [INFO][4950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.079 [INFO][4950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.085 [WARNING][4950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.085 [INFO][4950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.087 [INFO][4950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.093353 containerd[1500]: 2026-01-24 00:57:48.090 [INFO][4941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.094020 containerd[1500]: time="2026-01-24T00:57:48.093369452Z" level=info msg="TearDown network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" successfully" Jan 24 00:57:48.094020 containerd[1500]: time="2026-01-24T00:57:48.093390032Z" level=info msg="StopPodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" returns successfully" Jan 24 00:57:48.094789 containerd[1500]: time="2026-01-24T00:57:48.094749825Z" level=info msg="RemovePodSandbox for \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\"" Jan 24 00:57:48.094789 containerd[1500]: time="2026-01-24T00:57:48.094773645Z" level=info msg="Forcibly stopping sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\"" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.126 [WARNING][4965] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"267130dd-42b7-45fa-9166-0420d7cd47cc", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"0c8fb2d444327c17d0d4ca97727576ac926b98527fcc6f645ef8138912c10c89", Pod:"goldmane-666569f655-9lcpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8a3aa58e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.126 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.126 [INFO][4965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" iface="eth0" netns="" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.126 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.126 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.156 [INFO][4973] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.157 [INFO][4973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.158 [INFO][4973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.169 [WARNING][4973] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.169 [INFO][4973] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" HandleID="k8s-pod-network.77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Workload="ci--4081--3--6--n--32cc93a80b-k8s-goldmane--666569f655--9lcpv-eth0" Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.171 [INFO][4973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.178559 containerd[1500]: 2026-01-24 00:57:48.174 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d" Jan 24 00:57:48.179417 containerd[1500]: time="2026-01-24T00:57:48.178674983Z" level=info msg="TearDown network for sandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" successfully" Jan 24 00:57:48.185379 containerd[1500]: time="2026-01-24T00:57:48.185339144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:48.186518 containerd[1500]: time="2026-01-24T00:57:48.186474196Z" level=info msg="RemovePodSandbox \"77ab0cbafcc89f4443e2cf533c919cdaa642383054c9490d7af6cf2d3be4851d\" returns successfully" Jan 24 00:57:48.187365 containerd[1500]: time="2026-01-24T00:57:48.187294647Z" level=info msg="StopPodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\"" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.243 [WARNING][4988] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0", GenerateName:"calico-kube-controllers-85cdccf5-", Namespace:"calico-system", SelfLink:"", UID:"92edd234-ce88-420a-bb1b-56d2f203263f", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cdccf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56", Pod:"calico-kube-controllers-85cdccf5-5whtp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9f236ee9a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.243 [INFO][4988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.243 [INFO][4988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" iface="eth0" netns="" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.243 [INFO][4988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.243 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.279 [INFO][4995] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.279 [INFO][4995] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.279 [INFO][4995] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.292 [WARNING][4995] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.292 [INFO][4995] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.296 [INFO][4995] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.312455 containerd[1500]: 2026-01-24 00:57:48.301 [INFO][4988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.312455 containerd[1500]: time="2026-01-24T00:57:48.312349223Z" level=info msg="TearDown network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" successfully" Jan 24 00:57:48.312455 containerd[1500]: time="2026-01-24T00:57:48.312391513Z" level=info msg="StopPodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" returns successfully" Jan 24 00:57:48.317871 containerd[1500]: time="2026-01-24T00:57:48.314532267Z" level=info msg="RemovePodSandbox for \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\"" Jan 24 00:57:48.318312 containerd[1500]: time="2026-01-24T00:57:48.317836602Z" level=info msg="Forcibly stopping sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\"" Jan 24 00:57:48.409743 containerd[1500]: time="2026-01-24T00:57:48.409571314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:48.410973 containerd[1500]: time="2026-01-24T00:57:48.410949506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:48.411687 containerd[1500]: time="2026-01-24T00:57:48.410996076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:48.411721 kubelet[2546]: E0124 00:57:48.411238 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.411721 kubelet[2546]: E0124 00:57:48.411286 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.411721 kubelet[2546]: E0124 00:57:48.411375 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvjcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59667657-b8mx9_calico-apiserver(3be98e24-0896-49a9-8666-4ca8f66cf2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:48.412796 kubelet[2546]: E0124 00:57:48.412776 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.380 [WARNING][5009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0", GenerateName:"calico-kube-controllers-85cdccf5-", Namespace:"calico-system", SelfLink:"", UID:"92edd234-ce88-420a-bb1b-56d2f203263f", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cdccf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"991b54c87e49d50b252fa6a63fc45181e272d57e631f8cfbf0d9c74a82ed9d56", Pod:"calico-kube-controllers-85cdccf5-5whtp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9f236ee9a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.380 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.380 [INFO][5009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" iface="eth0" netns="" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.380 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.380 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.397 [INFO][5017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.398 [INFO][5017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.398 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.406 [WARNING][5017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.406 [INFO][5017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" HandleID="k8s-pod-network.a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--kube--controllers--85cdccf5--5whtp-eth0" Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.408 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.419504 containerd[1500]: 2026-01-24 00:57:48.412 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c" Jan 24 00:57:48.420266 containerd[1500]: time="2026-01-24T00:57:48.419589210Z" level=info msg="TearDown network for sandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" successfully" Jan 24 00:57:48.424876 containerd[1500]: time="2026-01-24T00:57:48.424765649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:48.424876 containerd[1500]: time="2026-01-24T00:57:48.424807159Z" level=info msg="RemovePodSandbox \"a76459f809f24468977c0635d780f8572d989f71d76e163ecda73729d954f19c\" returns successfully" Jan 24 00:57:48.425335 containerd[1500]: time="2026-01-24T00:57:48.425295479Z" level=info msg="StopPodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\"" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.472 [WARNING][5032] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60844a2d-0038-4132-8338-140b75e01a74", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280", Pod:"coredns-668d6bf9bc-mnh95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide3e79c8cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.472 [INFO][5032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.472 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" iface="eth0" netns="" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.472 [INFO][5032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.472 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.501 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.501 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.501 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.508 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.508 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.510 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.518472 containerd[1500]: 2026-01-24 00:57:48.514 [INFO][5032] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.518472 containerd[1500]: time="2026-01-24T00:57:48.518262303Z" level=info msg="TearDown network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" successfully" Jan 24 00:57:48.518472 containerd[1500]: time="2026-01-24T00:57:48.518291703Z" level=info msg="StopPodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" returns successfully" Jan 24 00:57:48.519126 containerd[1500]: time="2026-01-24T00:57:48.519001014Z" level=info msg="RemovePodSandbox for \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\"" Jan 24 00:57:48.519126 containerd[1500]: time="2026-01-24T00:57:48.519044724Z" level=info msg="Forcibly stopping sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\"" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.571 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60844a2d-0038-4132-8338-140b75e01a74", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"8857b600946e7338fcbf18612e1f35d326cc503f09384d59178532a4d2e9b280", Pod:"coredns-668d6bf9bc-mnh95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide3e79c8cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.571 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.571 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" iface="eth0" netns="" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.571 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.571 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.607 [INFO][5061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.607 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.607 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.617 [WARNING][5061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.617 [INFO][5061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" HandleID="k8s-pod-network.bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--mnh95-eth0" Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.619 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.627167 containerd[1500]: 2026-01-24 00:57:48.623 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5" Jan 24 00:57:48.627167 containerd[1500]: time="2026-01-24T00:57:48.627074362Z" level=info msg="TearDown network for sandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" successfully" Jan 24 00:57:48.636470 containerd[1500]: time="2026-01-24T00:57:48.636385187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:48.636470 containerd[1500]: time="2026-01-24T00:57:48.636458178Z" level=info msg="RemovePodSandbox \"bb268af4557ca6655434d066efced3db85a612e145e290fe843fe0365ad7bdd5\" returns successfully" Jan 24 00:57:48.637450 containerd[1500]: time="2026-01-24T00:57:48.637064499Z" level=info msg="StopPodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\"" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.686 [WARNING][5076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d12baac-e259-43f8-8c34-2fc70e4e9750", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4", Pod:"coredns-668d6bf9bc-7fv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ba17ad0cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.686 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.686 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" iface="eth0" netns="" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.686 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.686 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.718 [INFO][5083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.718 [INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.719 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.728 [WARNING][5083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.728 [INFO][5083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.733 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.742348 containerd[1500]: 2026-01-24 00:57:48.737 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.742348 containerd[1500]: time="2026-01-24T00:57:48.742254472Z" level=info msg="TearDown network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" successfully" Jan 24 00:57:48.742348 containerd[1500]: time="2026-01-24T00:57:48.742287642Z" level=info msg="StopPodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" returns successfully" Jan 24 00:57:48.743353 containerd[1500]: time="2026-01-24T00:57:48.743032243Z" level=info msg="RemovePodSandbox for \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\"" Jan 24 00:57:48.743353 containerd[1500]: time="2026-01-24T00:57:48.743071193Z" level=info msg="Forcibly stopping sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\"" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.791 [WARNING][5098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d12baac-e259-43f8-8c34-2fc70e4e9750", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"4b274f356428dd87c7c3d03c47c118cf4850d38eb78d1d46788c81c8e993f2b4", Pod:"coredns-668d6bf9bc-7fv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ba17ad0cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.792 [INFO][5098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.792 [INFO][5098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" iface="eth0" netns="" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.792 [INFO][5098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.792 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.809 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.809 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.809 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.816 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.817 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" HandleID="k8s-pod-network.ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Workload="ci--4081--3--6--n--32cc93a80b-k8s-coredns--668d6bf9bc--7fv4k-eth0" Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.818 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.824876 containerd[1500]: 2026-01-24 00:57:48.819 [INFO][5098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481" Jan 24 00:57:48.824876 containerd[1500]: time="2026-01-24T00:57:48.823541416Z" level=info msg="TearDown network for sandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" successfully" Jan 24 00:57:48.827891 containerd[1500]: time="2026-01-24T00:57:48.827832733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:48.827891 containerd[1500]: time="2026-01-24T00:57:48.827873823Z" level=info msg="RemovePodSandbox \"ba11e042217f8e4b40484292a53b786856df957fe4d7a333bd0a1a6ef8f12481\" returns successfully" Jan 24 00:57:48.828523 containerd[1500]: time="2026-01-24T00:57:48.828491804Z" level=info msg="StopPodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\"" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.861 [WARNING][5120] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"e25c9c50-eb09-419b-a216-dabe2aa24f5e", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3", Pod:"calico-apiserver-6ff89d9558-pr2mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e010050450", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.861 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.861 [INFO][5120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" iface="eth0" netns="" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.861 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.861 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.888 [INFO][5127] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.889 [INFO][5127] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.889 [INFO][5127] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.901 [WARNING][5127] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.902 [INFO][5127] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.903 [INFO][5127] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:48.910141 containerd[1500]: 2026-01-24 00:57:48.907 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:48.910141 containerd[1500]: time="2026-01-24T00:57:48.910063189Z" level=info msg="TearDown network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" successfully" Jan 24 00:57:48.910141 containerd[1500]: time="2026-01-24T00:57:48.910093139Z" level=info msg="StopPodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" returns successfully" Jan 24 00:57:48.912572 containerd[1500]: time="2026-01-24T00:57:48.912490243Z" level=info msg="RemovePodSandbox for \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\"" Jan 24 00:57:48.912572 containerd[1500]: time="2026-01-24T00:57:48.912546013Z" level=info msg="Forcibly stopping sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\"" Jan 24 00:57:48.971965 containerd[1500]: time="2026-01-24T00:57:48.971853850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.959 [WARNING][5141] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"e25c9c50-eb09-419b-a216-dabe2aa24f5e", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"edba8423462bd8484e093602438692b10ba4392cb8dc3364d67cd02564608eb3", Pod:"calico-apiserver-6ff89d9558-pr2mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e010050450", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.959 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.959 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" iface="eth0" netns="" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.959 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.959 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.996 [INFO][5149] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.996 [INFO][5149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:48.997 [INFO][5149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:49.006 [WARNING][5149] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:49.006 [INFO][5149] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" HandleID="k8s-pod-network.e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--pr2mw-eth0" Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:49.009 [INFO][5149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.015458 containerd[1500]: 2026-01-24 00:57:49.012 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f" Jan 24 00:57:49.016598 containerd[1500]: time="2026-01-24T00:57:49.015488971Z" level=info msg="TearDown network for sandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" successfully" Jan 24 00:57:49.020413 containerd[1500]: time="2026-01-24T00:57:49.020335858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:49.020588 containerd[1500]: time="2026-01-24T00:57:49.020417498Z" level=info msg="RemovePodSandbox \"e21f9a7bedb692f4cb3dbded049ddb229728defded5a1fcdfba2545bf409fd5f\" returns successfully" Jan 24 00:57:49.021218 containerd[1500]: time="2026-01-24T00:57:49.021175100Z" level=info msg="StopPodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\"" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.070 [WARNING][5164] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.070 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.070 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" iface="eth0" netns="" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.070 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.070 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.103 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.104 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.104 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.113 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.113 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.115 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.121859 containerd[1500]: 2026-01-24 00:57:49.118 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.123539 containerd[1500]: time="2026-01-24T00:57:49.121889485Z" level=info msg="TearDown network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" successfully" Jan 24 00:57:49.123539 containerd[1500]: time="2026-01-24T00:57:49.121937445Z" level=info msg="StopPodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" returns successfully" Jan 24 00:57:49.123539 containerd[1500]: time="2026-01-24T00:57:49.122819207Z" level=info msg="RemovePodSandbox for \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\"" Jan 24 00:57:49.123539 containerd[1500]: time="2026-01-24T00:57:49.122855547Z" level=info msg="Forcibly stopping sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\"" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.189 [WARNING][5186] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" WorkloadEndpoint="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.189 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.189 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" iface="eth0" netns="" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.189 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.189 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.225 [INFO][5193] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.226 [INFO][5193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.226 [INFO][5193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.234 [WARNING][5193] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.234 [INFO][5193] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" HandleID="k8s-pod-network.caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Workload="ci--4081--3--6--n--32cc93a80b-k8s-whisker--675bdfd5f--2k8rp-eth0" Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.236 [INFO][5193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.246115 containerd[1500]: 2026-01-24 00:57:49.242 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053" Jan 24 00:57:49.249042 containerd[1500]: time="2026-01-24T00:57:49.246259867Z" level=info msg="TearDown network for sandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" successfully" Jan 24 00:57:49.252610 containerd[1500]: time="2026-01-24T00:57:49.252533157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:49.252610 containerd[1500]: time="2026-01-24T00:57:49.252608267Z" level=info msg="RemovePodSandbox \"caa2653b5e4099534e31eca9e2b062c04e8173a1b4f42cdefa228cfaa1e6b053\" returns successfully" Jan 24 00:57:49.253215 containerd[1500]: time="2026-01-24T00:57:49.253159518Z" level=info msg="StopPodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\"" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.304 [WARNING][5207] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067", Pod:"calico-apiserver-6ff89d9558-qsdz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09b5cb2c02c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.305 [INFO][5207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.305 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" iface="eth0" netns="" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.305 [INFO][5207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.305 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.360 [INFO][5214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.361 [INFO][5214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.361 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.367 [WARNING][5214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.367 [INFO][5214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.368 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.373448 containerd[1500]: 2026-01-24 00:57:49.370 [INFO][5207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.373448 containerd[1500]: time="2026-01-24T00:57:49.373360044Z" level=info msg="TearDown network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" successfully" Jan 24 00:57:49.373448 containerd[1500]: time="2026-01-24T00:57:49.373378894Z" level=info msg="StopPodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" returns successfully" Jan 24 00:57:49.373879 containerd[1500]: time="2026-01-24T00:57:49.373805645Z" level=info msg="RemovePodSandbox for \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\"" Jan 24 00:57:49.373879 containerd[1500]: time="2026-01-24T00:57:49.373837025Z" level=info msg="Forcibly stopping sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\"" Jan 24 00:57:49.411562 containerd[1500]: time="2026-01-24T00:57:49.411456843Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:49.413527 containerd[1500]: time="2026-01-24T00:57:49.413336896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:49.413628 containerd[1500]: time="2026-01-24T00:57:49.413579286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:49.415609 kubelet[2546]: E0124 00:57:49.414064 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:49.415609 kubelet[2546]: E0124 00:57:49.414145 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:49.415609 kubelet[2546]: E0124 00:57:49.414294 2546 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjqwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccf5-5whtp_calico-system(92edd234-ce88-420a-bb1b-56d2f203263f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:49.416864 kubelet[2546]: E0124 00:57:49.416711 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.406 [WARNING][5235] 
cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0", GenerateName:"calico-apiserver-6ff89d9558-", Namespace:"calico-apiserver", SelfLink:"", UID:"abee6eff-7ee6-4417-a4eb-5f0514e6e7e9", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff89d9558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"0cfad3d37f50920f29fba2784c573c59708a2d2b0697b591f4e2ac32bdb24067", Pod:"calico-apiserver-6ff89d9558-qsdz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09b5cb2c02c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.407 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.407 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" iface="eth0" netns="" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.407 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.407 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.442 [INFO][5243] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.442 [INFO][5243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.442 [INFO][5243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.450 [WARNING][5243] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.450 [INFO][5243] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" HandleID="k8s-pod-network.cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--6ff89d9558--qsdz4-eth0" Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.452 [INFO][5243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.458484 containerd[1500]: 2026-01-24 00:57:49.455 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1" Jan 24 00:57:49.459192 containerd[1500]: time="2026-01-24T00:57:49.458538655Z" level=info msg="TearDown network for sandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" successfully" Jan 24 00:57:49.463841 containerd[1500]: time="2026-01-24T00:57:49.463788214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:49.464836 containerd[1500]: time="2026-01-24T00:57:49.463847584Z" level=info msg="RemovePodSandbox \"cc13e5b333c492a15e7614e5d51324566f831788a9cd5cc424e87f5203496bc1\" returns successfully" Jan 24 00:57:49.464836 containerd[1500]: time="2026-01-24T00:57:49.464440635Z" level=info msg="StopPodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\"" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.513 [WARNING][5257] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135", Pod:"csi-node-driver-ftl5s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90b2fb94ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.513 [INFO][5257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.513 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" iface="eth0" netns="" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.513 [INFO][5257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.513 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.541 [INFO][5264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.541 [INFO][5264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.541 [INFO][5264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.550 [WARNING][5264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.550 [INFO][5264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.551 [INFO][5264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.560705 containerd[1500]: 2026-01-24 00:57:49.555 [INFO][5257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.560705 containerd[1500]: time="2026-01-24T00:57:49.559209361Z" level=info msg="TearDown network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" successfully" Jan 24 00:57:49.560705 containerd[1500]: time="2026-01-24T00:57:49.559244391Z" level=info msg="StopPodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" returns successfully" Jan 24 00:57:49.560705 containerd[1500]: time="2026-01-24T00:57:49.559870112Z" level=info msg="RemovePodSandbox for \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\"" Jan 24 00:57:49.560705 containerd[1500]: time="2026-01-24T00:57:49.559905252Z" level=info msg="Forcibly stopping sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\"" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.606 [WARNING][5279] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43bd5f1f-4a0c-4b9f-b986-69bf7780bcee", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"c6152d39adbb0240b8f24fb80cd6aab643d876dac40c4ae6e0d02a8799576135", Pod:"csi-node-driver-ftl5s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90b2fb94ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.607 [INFO][5279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.607 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" iface="eth0" netns="" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.607 [INFO][5279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.607 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.640 [INFO][5286] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.640 [INFO][5286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.641 [INFO][5286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.650 [WARNING][5286] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.650 [INFO][5286] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" HandleID="k8s-pod-network.35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Workload="ci--4081--3--6--n--32cc93a80b-k8s-csi--node--driver--ftl5s-eth0" Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.652 [INFO][5286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.658591 containerd[1500]: 2026-01-24 00:57:49.654 [INFO][5279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88" Jan 24 00:57:49.659398 containerd[1500]: time="2026-01-24T00:57:49.658651905Z" level=info msg="TearDown network for sandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" successfully" Jan 24 00:57:49.663602 containerd[1500]: time="2026-01-24T00:57:49.663535742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:49.663665 containerd[1500]: time="2026-01-24T00:57:49.663631812Z" level=info msg="RemovePodSandbox \"35e4953a296a961a054181d1aa896e675da60d2594a6e7b913242f403f394e88\" returns successfully" Jan 24 00:57:49.664282 containerd[1500]: time="2026-01-24T00:57:49.664242343Z" level=info msg="StopPodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\"" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.709 [WARNING][5300] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0", GenerateName:"calico-apiserver-59667657-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be98e24-0896-49a9-8666-4ca8f66cf2c8", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59667657", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589", Pod:"calico-apiserver-59667657-b8mx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23d5b56d9b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.710 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.710 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" iface="eth0" netns="" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.710 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.710 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.733 [INFO][5307] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.733 [INFO][5307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.733 [INFO][5307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.739 [WARNING][5307] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.739 [INFO][5307] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.741 [INFO][5307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.748832 containerd[1500]: 2026-01-24 00:57:49.744 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.749468 containerd[1500]: time="2026-01-24T00:57:49.748867854Z" level=info msg="TearDown network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" successfully" Jan 24 00:57:49.749468 containerd[1500]: time="2026-01-24T00:57:49.748896164Z" level=info msg="StopPodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" returns successfully" Jan 24 00:57:49.750534 containerd[1500]: time="2026-01-24T00:57:49.750482237Z" level=info msg="RemovePodSandbox for \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\"" Jan 24 00:57:49.750609 containerd[1500]: time="2026-01-24T00:57:49.750527307Z" level=info msg="Forcibly stopping sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\"" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.795 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0", GenerateName:"calico-apiserver-59667657-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be98e24-0896-49a9-8666-4ca8f66cf2c8", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59667657", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32cc93a80b", ContainerID:"9abde062edfac768314121e4edfd7041bf4adc91bb909ef37aa0d65b67c7e589", Pod:"calico-apiserver-59667657-b8mx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23d5b56d9b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.795 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.795 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" iface="eth0" netns="" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.795 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.795 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.818 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.818 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.818 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.823 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.823 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" HandleID="k8s-pod-network.934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Workload="ci--4081--3--6--n--32cc93a80b-k8s-calico--apiserver--59667657--b8mx9-eth0" Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.825 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:49.831794 containerd[1500]: 2026-01-24 00:57:49.828 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b" Jan 24 00:57:49.831794 containerd[1500]: time="2026-01-24T00:57:49.830105720Z" level=info msg="TearDown network for sandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" successfully" Jan 24 00:57:49.833664 containerd[1500]: time="2026-01-24T00:57:49.833162155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:49.833664 containerd[1500]: time="2026-01-24T00:57:49.833208905Z" level=info msg="RemovePodSandbox \"934533da5a1c76725bbaafd12ab91034fc55767a58215f0bc374fe7cd77e1d5b\" returns successfully" Jan 24 00:57:49.970783 containerd[1500]: time="2026-01-24T00:57:49.970449027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:50.397599 containerd[1500]: time="2026-01-24T00:57:50.397527878Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:50.399093 containerd[1500]: time="2026-01-24T00:57:50.399048110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:50.399277 containerd[1500]: time="2026-01-24T00:57:50.399130721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:50.399414 kubelet[2546]: E0124 00:57:50.399272 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:50.399414 kubelet[2546]: E0124 00:57:50.399318 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:50.399981 kubelet[2546]: E0124 00:57:50.399524 2546 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvgfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-qsdz4_calico-apiserver(abee6eff-7ee6-4417-a4eb-5f0514e6e7e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:50.400189 containerd[1500]: time="2026-01-24T00:57:50.399882382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:50.401307 kubelet[2546]: E0124 00:57:50.401249 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:57:50.846511 containerd[1500]: time="2026-01-24T00:57:50.846449149Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:50.848072 containerd[1500]: time="2026-01-24T00:57:50.847940821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:50.848072 containerd[1500]: time="2026-01-24T00:57:50.848040531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:57:50.848200 kubelet[2546]: E0124 00:57:50.848153 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:50.848584 kubelet[2546]: E0124 00:57:50.848202 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:50.848584 kubelet[2546]: E0124 00:57:50.848414 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:50.849557 containerd[1500]: time="2026-01-24T00:57:50.849520953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 
00:57:51.284385 containerd[1500]: time="2026-01-24T00:57:51.284279038Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:51.285348 containerd[1500]: time="2026-01-24T00:57:51.285313339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:51.285697 containerd[1500]: time="2026-01-24T00:57:51.285384009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:51.285893 kubelet[2546]: E0124 00:57:51.285542 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:51.285893 kubelet[2546]: E0124 00:57:51.285589 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:51.285893 kubelet[2546]: E0124 00:57:51.285834 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhfqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9lcpv_calico-system(267130dd-42b7-45fa-9166-0420d7cd47cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:51.286695 containerd[1500]: time="2026-01-24T00:57:51.286488891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:51.288166 kubelet[2546]: E0124 00:57:51.287980 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:57:51.711043 containerd[1500]: time="2026-01-24T00:57:51.710958067Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:51.712779 containerd[1500]: time="2026-01-24T00:57:51.712592329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:51.712779 containerd[1500]: time="2026-01-24T00:57:51.712657929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:57:51.713020 kubelet[2546]: E0124 00:57:51.712907 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:51.713097 kubelet[2546]: E0124 00:57:51.713007 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:51.713282 kubelet[2546]: E0124 00:57:51.713196 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:51.715084 kubelet[2546]: E0124 00:57:51.714984 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:57:54.971841 kubelet[2546]: E0124 00:57:54.971721 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:57:58.261271 systemd[1]: run-containerd-runc-k8s.io-0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa-runc.V8jgGa.mount: Deactivated successfully. Jan 24 00:57:58.969785 kubelet[2546]: E0124 00:57:58.969698 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:58:00.971412 kubelet[2546]: E0124 00:58:00.971351 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:58:02.971332 kubelet[2546]: E0124 00:58:02.970824 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:58:03.973198 kubelet[2546]: E0124 00:58:03.973136 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" 
podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:58:05.972511 kubelet[2546]: E0124 00:58:05.972359 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:58:05.977049 kubelet[2546]: E0124 00:58:05.975251 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:58:09.973118 containerd[1500]: time="2026-01-24T00:58:09.973050268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:10.404558 containerd[1500]: time="2026-01-24T00:58:10.404341786Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:10.405967 containerd[1500]: time="2026-01-24T00:58:10.405901738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:10.405967 containerd[1500]: time="2026-01-24T00:58:10.406004322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:58:10.406342 kubelet[2546]: E0124 00:58:10.406239 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:10.406342 kubelet[2546]: E0124 00:58:10.406303 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:10.407231 kubelet[2546]: E0124 00:58:10.406964 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116ca17b1744963b9e4b3aac3adf522,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:10.410420 containerd[1500]: time="2026-01-24T00:58:10.410346516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:10.843783 containerd[1500]: time="2026-01-24T00:58:10.843470803Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:10.845372 containerd[1500]: time="2026-01-24T00:58:10.845324149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:10.845477 containerd[1500]: time="2026-01-24T00:58:10.845423293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:10.845811 kubelet[2546]: E0124 00:58:10.845659 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:10.845811 kubelet[2546]: E0124 00:58:10.845727 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:10.846137 kubelet[2546]: E0124 00:58:10.845892 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:10.847226 kubelet[2546]: E0124 00:58:10.847159 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:58:13.975414 containerd[1500]: time="2026-01-24T00:58:13.973278907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:14.423233 containerd[1500]: time="2026-01-24T00:58:14.422411411Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:14.426725 containerd[1500]: time="2026-01-24T00:58:14.424907554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:14.426725 containerd[1500]: time="2026-01-24T00:58:14.425026248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:14.426906 kubelet[2546]: E0124 00:58:14.426023 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:14.426906 kubelet[2546]: E0124 00:58:14.426090 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:14.426906 kubelet[2546]: E0124 00:58:14.426223 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbb4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-pr2mw_calico-apiserver(e25c9c50-eb09-419b-a216-dabe2aa24f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:14.428004 kubelet[2546]: E0124 00:58:14.427851 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:58:14.971537 containerd[1500]: time="2026-01-24T00:58:14.971417126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:15.406607 containerd[1500]: time="2026-01-24T00:58:15.406148424Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:15.407803 containerd[1500]: time="2026-01-24T00:58:15.407665520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:15.408049 containerd[1500]: time="2026-01-24T00:58:15.407780984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:15.408292 kubelet[2546]: E0124 00:58:15.408220 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:15.408422 kubelet[2546]: E0124 00:58:15.408303 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:15.408592 kubelet[2546]: E0124 
00:58:15.408460 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvjcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59667657-b8mx9_calico-apiserver(3be98e24-0896-49a9-8666-4ca8f66cf2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:15.409826 kubelet[2546]: E0124 00:58:15.409726 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:58:16.970419 containerd[1500]: time="2026-01-24T00:58:16.970363174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:17.412658 containerd[1500]: time="2026-01-24T00:58:17.412456178Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:17.414346 containerd[1500]: time="2026-01-24T00:58:17.414230355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:17.414453 containerd[1500]: time="2026-01-24T00:58:17.414352999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:17.414664 kubelet[2546]: E0124 00:58:17.414563 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:17.414664 kubelet[2546]: E0124 00:58:17.414653 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:17.415410 kubelet[2546]: E0124 00:58:17.414835 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvgfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-qsdz4_calico-apiserver(abee6eff-7ee6-4417-a4eb-5f0514e6e7e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:17.416433 kubelet[2546]: E0124 00:58:17.416222 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:58:18.971403 containerd[1500]: time="2026-01-24T00:58:18.971360960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:58:19.419553 containerd[1500]: time="2026-01-24T00:58:19.419405100Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:19.420517 containerd[1500]: time="2026-01-24T00:58:19.420487252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:58:19.421666 containerd[1500]: time="2026-01-24T00:58:19.420609556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:19.421755 kubelet[2546]: E0124 00:58:19.421196 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:19.421755 kubelet[2546]: E0124 00:58:19.421237 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:19.421755 kubelet[2546]: E0124 00:58:19.421405 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjqwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccf5-5whtp_calico-system(92edd234-ce88-420a-bb1b-56d2f203263f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:19.423810 containerd[1500]: time="2026-01-24T00:58:19.423653391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:19.424084 kubelet[2546]: E0124 00:58:19.424031 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" 
podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:58:19.852292 containerd[1500]: time="2026-01-24T00:58:19.852003271Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:19.853493 containerd[1500]: time="2026-01-24T00:58:19.853351011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:19.853493 containerd[1500]: time="2026-01-24T00:58:19.853425687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:58:19.855226 kubelet[2546]: E0124 00:58:19.853667 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:19.855226 kubelet[2546]: E0124 00:58:19.853709 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:19.855226 kubelet[2546]: E0124 00:58:19.853811 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:19.856107 containerd[1500]: time="2026-01-24T00:58:19.856051511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:20.281592 containerd[1500]: time="2026-01-24T00:58:20.281529340Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:20.283498 containerd[1500]: time="2026-01-24T00:58:20.283156480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:20.283498 containerd[1500]: time="2026-01-24T00:58:20.283263885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:58:20.283705 kubelet[2546]: E0124 00:58:20.283601 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:20.283852 kubelet[2546]: E0124 00:58:20.283789 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:20.285751 kubelet[2546]: E0124 00:58:20.284074 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:20.285866 containerd[1500]: time="2026-01-24T00:58:20.285467550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:20.286062 kubelet[2546]: E0124 00:58:20.285998 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:58:20.732024 containerd[1500]: time="2026-01-24T00:58:20.731815473Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:20.734135 containerd[1500]: time="2026-01-24T00:58:20.733899173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:20.734135 containerd[1500]: time="2026-01-24T00:58:20.733985149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:20.735850 kubelet[2546]: E0124 00:58:20.734479 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:20.735850 kubelet[2546]: E0124 00:58:20.734543 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:20.735850 kubelet[2546]: E0124 00:58:20.734697 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhfqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9lcpv_calico-system(267130dd-42b7-45fa-9166-0420d7cd47cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:20.737104 kubelet[2546]: E0124 00:58:20.736792 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:58:25.972356 kubelet[2546]: E0124 00:58:25.972009 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:58:25.973252 kubelet[2546]: E0124 00:58:25.973158 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:58:28.972460 kubelet[2546]: E0124 00:58:28.972379 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:58:29.970965 kubelet[2546]: E0124 00:58:29.970895 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:58:29.971549 kubelet[2546]: E0124 00:58:29.971090 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:58:31.971348 kubelet[2546]: E0124 00:58:31.971268 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:58:34.990960 systemd[1]: Started sshd@7-89.167.6.198:22-20.161.92.111:34316.service - OpenSSH per-connection server daemon (20.161.92.111:34316). Jan 24 00:58:35.760764 sshd[5402]: Accepted publickey for core from 20.161.92.111 port 34316 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:58:35.765645 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:35.774172 systemd-logind[1476]: New session 8 of user core. Jan 24 00:58:35.778174 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 24 00:58:35.975013 kubelet[2546]: E0124 00:58:35.974937 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:58:36.377341 sshd[5402]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:36.381455 systemd[1]: sshd@7-89.167.6.198:22-20.161.92.111:34316.service: Deactivated successfully. Jan 24 00:58:36.381664 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:58:36.383630 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:58:36.385170 systemd-logind[1476]: Removed session 8. Jan 24 00:58:39.974848 kubelet[2546]: E0124 00:58:39.974785 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:58:40.971504 kubelet[2546]: E0124 00:58:40.971316 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:58:40.973688 kubelet[2546]: E0124 00:58:40.973600 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:58:41.515153 systemd[1]: Started sshd@8-89.167.6.198:22-20.161.92.111:34324.service - OpenSSH per-connection server daemon (20.161.92.111:34324). Jan 24 00:58:41.970590 kubelet[2546]: E0124 00:58:41.970533 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:58:42.290999 sshd[5416]: Accepted publickey for core from 20.161.92.111 port 34324 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:58:42.293251 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:42.301891 systemd-logind[1476]: New session 9 of user core. Jan 24 00:58:42.307967 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:58:42.930602 sshd[5416]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:42.939160 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:58:42.940703 systemd[1]: sshd@8-89.167.6.198:22-20.161.92.111:34324.service: Deactivated successfully. Jan 24 00:58:42.945367 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:58:42.947423 systemd-logind[1476]: Removed session 9. 
Jan 24 00:58:42.970959 kubelet[2546]: E0124 00:58:42.970716 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:58:44.969871 kubelet[2546]: E0124 00:58:44.969809 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:58:46.971337 kubelet[2546]: E0124 00:58:46.971296 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:58:48.068952 systemd[1]: Started sshd@9-89.167.6.198:22-20.161.92.111:42168.service - OpenSSH per-connection server daemon (20.161.92.111:42168). Jan 24 00:58:48.838028 sshd[5432]: Accepted publickey for core from 20.161.92.111 port 42168 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:58:48.839544 sshd[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:48.843888 systemd-logind[1476]: New session 10 of user core. Jan 24 00:58:48.849844 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:58:49.497159 sshd[5432]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:49.504687 systemd[1]: sshd@9-89.167.6.198:22-20.161.92.111:42168.service: Deactivated successfully. Jan 24 00:58:49.511934 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:58:49.517722 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:58:49.520586 systemd-logind[1476]: Removed session 10. Jan 24 00:58:49.632456 systemd[1]: Started sshd@10-89.167.6.198:22-20.161.92.111:42170.service - OpenSSH per-connection server daemon (20.161.92.111:42170). 
Jan 24 00:58:50.389665 sshd[5456]: Accepted publickey for core from 20.161.92.111 port 42170 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:58:50.392441 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:50.400714 systemd-logind[1476]: New session 11 of user core. Jan 24 00:58:50.405951 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:58:50.970344 kubelet[2546]: E0124 00:58:50.970000 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:58:51.003319 sshd[5456]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:51.011437 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:58:51.014210 systemd[1]: sshd@10-89.167.6.198:22-20.161.92.111:42170.service: Deactivated successfully. Jan 24 00:58:51.020715 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:58:51.025475 systemd-logind[1476]: Removed session 11. Jan 24 00:58:51.148015 systemd[1]: Started sshd@11-89.167.6.198:22-20.161.92.111:42178.service - OpenSSH per-connection server daemon (20.161.92.111:42178). Jan 24 00:58:51.915247 sshd[5468]: Accepted publickey for core from 20.161.92.111 port 42178 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:58:51.921010 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:51.935097 systemd-logind[1476]: New session 12 of user core. Jan 24 00:58:51.941983 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:58:52.565390 sshd[5468]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:52.570532 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:58:52.571571 systemd[1]: sshd@11-89.167.6.198:22-20.161.92.111:42178.service: Deactivated successfully. Jan 24 00:58:52.573792 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:58:52.575111 systemd-logind[1476]: Removed session 12. 
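
With six workloads cycling through the same error, it is easier to tally which pod/image pairs are stuck than to read every repeat. A throwaway sketch that scans journal output like the lines above; the pod="..." field and the ghcr.io reference format are inferred from these kubelet records, and the exact journalctl invocation on this host is an assumption:

    // pullfailures.go - tallies pod/image pairs from kubelet "Error syncing pod"
    // records fed on stdin, e.g.: journalctl -u kubelet | go run pullfailures.go
    // (unit name is hypothetical; it depends on how kubelet is launched here).
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var (
    	podRe   = regexp.MustCompile(`pod="([^"]+)"`)
    	imageRe = regexp.MustCompile(`ghcr\.io/flatcar/calico/[a-z-]+:v[0-9.]+`)
    )

    func main() {
    	count := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be very long
    	for sc.Scan() {
    		line := sc.Text()
    		pod := podRe.FindStringSubmatch(line)
    		if pod == nil {
    			continue
    		}
    		for _, img := range imageRe.FindAllString(line, -1) {
    			count[pod[1]+" <- "+img]++
    		}
    	}
    	for k, v := range count {
    		fmt.Printf("%4d  %s\n", v, k)
    	}
    }
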
Jan 24 00:58:53.971720 kubelet[2546]: E0124 00:58:53.971550 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:58:53.972335 containerd[1500]: time="2026-01-24T00:58:53.972242791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:54.405341 containerd[1500]: time="2026-01-24T00:58:54.405101941Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:54.408726 containerd[1500]: time="2026-01-24T00:58:54.406722989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:54.408726 containerd[1500]: time="2026-01-24T00:58:54.406869876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:58:54.408934 kubelet[2546]: E0124 00:58:54.407126 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:54.408934 kubelet[2546]: E0124 00:58:54.407207 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:54.408934 kubelet[2546]: E0124 00:58:54.407354 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e116ca17b1744963b9e4b3aac3adf522,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:54.411281 containerd[1500]: time="2026-01-24T00:58:54.410929216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:54.830349 containerd[1500]: time="2026-01-24T00:58:54.830108407Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:54.832294 containerd[1500]: time="2026-01-24T00:58:54.832107468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:54.832294 containerd[1500]: time="2026-01-24T00:58:54.832187296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:58:54.832463 kubelet[2546]: E0124 00:58:54.832317 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:54.832463 kubelet[2546]: E0124 00:58:54.832358 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:54.832463 kubelet[2546]: E0124 00:58:54.832439 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkcx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bf994bc7f-8g8k6_calico-system(52940e35-8fee-4532-9c73-0644eb969513): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:54.834767 kubelet[2546]: E0124 00:58:54.833721 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:58:55.972475 kubelet[2546]: E0124 00:58:55.972398 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:58:55.973469 containerd[1500]: time="2026-01-24T00:58:55.972723688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:55.975785 kubelet[2546]: E0124 00:58:55.972628 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:58:56.422343 containerd[1500]: time="2026-01-24T00:58:56.422097598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:58:56.423876 containerd[1500]: time="2026-01-24T00:58:56.423692037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:56.423876 containerd[1500]: time="2026-01-24T00:58:56.423812305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:58:56.425204 kubelet[2546]: E0124 00:58:56.424106 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:56.425204 kubelet[2546]: E0124 00:58:56.424180 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:56.425204 kubelet[2546]: E0124 00:58:56.424327 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvjcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59667657-b8mx9_calico-apiserver(3be98e24-0896-49a9-8666-4ca8f66cf2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:56.425543 kubelet[2546]: E0124 00:58:56.425510 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:58:57.706290 systemd[1]: Started sshd@12-89.167.6.198:22-20.161.92.111:51678.service - OpenSSH per-connection server daemon (20.161.92.111:51678). Jan 24 00:58:58.263040 systemd[1]: run-containerd-runc-k8s.io-0b66e3806099d827dc7122b9ff99bb076625f906ba3b820f870a192b341883aa-runc.pouSva.mount: Deactivated successfully. Jan 24 00:58:58.481639 sshd[5485]: Accepted publickey for core from 20.161.92.111 port 51678 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:58:58.483161 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:58.490232 systemd-logind[1476]: New session 13 of user core. Jan 24 00:58:58.494851 systemd[1]: Started session-13.scope - Session 13 of User core. 
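
The "PullImage from image service failed" records above are the kubelet's side of the CRI ImageService.PullImage RPC against containerd. A sketch that issues the same RPC by hand, using the published CRI protobuf client; the socket path and the k8s.io/cri-api module are the conventional ones for containerd, assumed to apply on this host:

    // cripull.go - replays the CRI PullImage call that fails in the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)
    	_, err = img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4"},
    	})
    	// For the reference in this log, err should carry gRPC code NotFound with
    	// the same "failed to resolve reference" text the kubelet records.
    	fmt.Println("PullImage:", err)
    }
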
Jan 24 00:58:59.084942 sshd[5485]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:59.089360 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:58:59.090399 systemd[1]: sshd@12-89.167.6.198:22-20.161.92.111:51678.service: Deactivated successfully. Jan 24 00:58:59.094039 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:58:59.095913 systemd-logind[1476]: Removed session 13. Jan 24 00:58:59.229883 systemd[1]: Started sshd@13-89.167.6.198:22-20.161.92.111:51694.service - OpenSSH per-connection server daemon (20.161.92.111:51694). Jan 24 00:58:59.971613 containerd[1500]: time="2026-01-24T00:58:59.971480677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:59:00.001758 sshd[5519]: Accepted publickey for core from 20.161.92.111 port 51694 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:59:00.004980 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:00.020858 systemd-logind[1476]: New session 14 of user core. Jan 24 00:59:00.030919 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:59:00.435180 containerd[1500]: time="2026-01-24T00:59:00.434913308Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:00.437637 containerd[1500]: time="2026-01-24T00:59:00.437476963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:59:00.437637 containerd[1500]: time="2026-01-24T00:59:00.437585791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:59:00.438010 kubelet[2546]: E0124 00:59:00.437910 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:59:00.438010 kubelet[2546]: E0124 00:59:00.437968 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:59:00.440276 kubelet[2546]: E0124 00:59:00.439938 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:00.444487 containerd[1500]: time="2026-01-24T00:59:00.444092496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:59:00.827340 sshd[5519]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:00.834956 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:59:00.837347 systemd[1]: sshd@13-89.167.6.198:22-20.161.92.111:51694.service: Deactivated successfully. Jan 24 00:59:00.843167 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:59:00.851518 systemd-logind[1476]: Removed session 14. 
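
On the containerd side, "failed to pull and unpack image" is the pull-with-unpack path, and the CRI plugin keeps its images in the k8s.io namespace. A sketch of the equivalent pull through containerd's own Go client (v1 module layout assumed), which should fail with the same "failed to resolve reference ...: not found" string seen above:

    // ctrpull.go - the same pull containerd performs for the kubelet, done
    // directly through its Go client.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// k8s.io is the namespace the CRI plugin uses for Kubernetes images.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.30.4", containerd.WithPullUnpack)
    	// Expected for this reference: "failed to resolve reference ...: not found",
    	// i.e. the error string that surfaces in the kubelet records above.
    	fmt.Println("Pull:", err)
    }
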
Jan 24 00:59:00.887064 containerd[1500]: time="2026-01-24T00:59:00.886995469Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:00.889659 containerd[1500]: time="2026-01-24T00:59:00.889049953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:59:00.889659 containerd[1500]: time="2026-01-24T00:59:00.889117562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:59:00.889776 kubelet[2546]: E0124 00:59:00.889254 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:59:00.889776 kubelet[2546]: E0124 00:59:00.889297 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:59:00.889776 kubelet[2546]: E0124 00:59:00.889408 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdst7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ftl5s_calico-system(43bd5f1f-4a0c-4b9f-b986-69bf7780bcee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:00.890577 kubelet[2546]: E0124 00:59:00.890547 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:59:00.971848 systemd[1]: Started sshd@14-89.167.6.198:22-20.161.92.111:51696.service - OpenSSH per-connection server daemon (20.161.92.111:51696). Jan 24 00:59:01.734847 sshd[5551]: Accepted publickey for core from 20.161.92.111 port 51696 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:59:01.739009 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:01.748840 systemd-logind[1476]: New session 15 of user core. 
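
Every failure in this log carries "rpc error: code = NotFound", which is a structured gRPC status rather than free text; the code is what lets callers distinguish a missing tag from a transient registry outage. A small sketch of classifying such errors by status code (the mapping shown is illustrative, not the kubelet's exact policy):

    // status.go - shows how the "rpc error: code = NotFound" strings above map
    // onto structured gRPC status values.
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    func classify(err error) string {
    	// status.Code returns codes.OK for nil and codes.Unknown for non-gRPC errors.
    	switch status.Code(err) {
    	case codes.NotFound:
    		return "reference does not exist in the registry (bad name or tag)"
    	case codes.Unavailable:
    		return "registry unreachable; worth retrying"
    	default:
    		return "other failure: " + err.Error()
    	}
    }

    func main() {
    	err := status.Error(codes.NotFound,
    		`failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.30.4": ... not found`)
    	fmt.Println(classify(err))
    }
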
Jan 24 00:59:01.757961 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:59:01.973291 containerd[1500]: time="2026-01-24T00:59:01.973212552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:59:02.425855 containerd[1500]: time="2026-01-24T00:59:02.425791894Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:02.428217 containerd[1500]: time="2026-01-24T00:59:02.428174704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:59:02.428364 containerd[1500]: time="2026-01-24T00:59:02.428260322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:59:02.430053 kubelet[2546]: E0124 00:59:02.428543 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:02.430053 kubelet[2546]: E0124 00:59:02.428603 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:02.430053 kubelet[2546]: E0124 00:59:02.428871 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbb4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-pr2mw_calico-apiserver(e25c9c50-eb09-419b-a216-dabe2aa24f5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:02.431152 kubelet[2546]: E0124 00:59:02.430869 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:59:02.995138 sshd[5551]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:03.002475 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:59:03.007300 systemd[1]: sshd@14-89.167.6.198:22-20.161.92.111:51696.service: Deactivated successfully. Jan 24 00:59:03.010628 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:59:03.014312 systemd-logind[1476]: Removed session 15. Jan 24 00:59:03.135092 systemd[1]: Started sshd@15-89.167.6.198:22-20.161.92.111:46966.service - OpenSSH per-connection server daemon (20.161.92.111:46966). Jan 24 00:59:03.889777 sshd[5572]: Accepted publickey for core from 20.161.92.111 port 46966 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:59:03.894066 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:03.910479 systemd-logind[1476]: New session 16 of user core. Jan 24 00:59:03.916987 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:59:04.572032 sshd[5572]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:04.576323 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:59:04.577340 systemd[1]: sshd@15-89.167.6.198:22-20.161.92.111:46966.service: Deactivated successfully. Jan 24 00:59:04.581121 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:59:04.582516 systemd-logind[1476]: Removed session 16. Jan 24 00:59:04.705898 systemd[1]: Started sshd@16-89.167.6.198:22-20.161.92.111:46982.service - OpenSSH per-connection server daemon (20.161.92.111:46982). 
Jan 24 00:59:05.477854 sshd[5583]: Accepted publickey for core from 20.161.92.111 port 46982 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:59:05.481898 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:05.488581 systemd-logind[1476]: New session 17 of user core. Jan 24 00:59:05.494090 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:59:06.134487 sshd[5583]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:06.138870 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:59:06.138877 systemd[1]: sshd@16-89.167.6.198:22-20.161.92.111:46982.service: Deactivated successfully. Jan 24 00:59:06.140839 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:59:06.142176 systemd-logind[1476]: Removed session 17. Jan 24 00:59:08.972447 containerd[1500]: time="2026-01-24T00:59:08.972235838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:59:08.974417 kubelet[2546]: E0124 00:59:08.974373 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:59:09.405583 containerd[1500]: time="2026-01-24T00:59:09.405458423Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:09.407198 containerd[1500]: time="2026-01-24T00:59:09.407152317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:59:09.407321 containerd[1500]: time="2026-01-24T00:59:09.407224996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:59:09.407904 kubelet[2546]: E0124 00:59:09.407472 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:09.407904 kubelet[2546]: E0124 00:59:09.407512 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:09.407904 kubelet[2546]: E0124 00:59:09.407615 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvgfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ff89d9558-qsdz4_calico-apiserver(abee6eff-7ee6-4417-a4eb-5f0514e6e7e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:09.409027 kubelet[2546]: E0124 00:59:09.409006 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:59:09.974800 kubelet[2546]: E0124 00:59:09.972369 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:59:09.976256 containerd[1500]: time="2026-01-24T00:59:09.975882134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:59:10.408306 containerd[1500]: time="2026-01-24T00:59:10.408002898Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:10.409611 containerd[1500]: time="2026-01-24T00:59:10.409498425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:59:10.409721 containerd[1500]: time="2026-01-24T00:59:10.409581384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:59:10.409850 kubelet[2546]: E0124 00:59:10.409795 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:59:10.409916 kubelet[2546]: E0124 00:59:10.409862 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:59:10.410154 kubelet[2546]: E0124 00:59:10.410097 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjqwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccf5-5whtp_calico-system(92edd234-ce88-420a-bb1b-56d2f203263f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:10.411140 containerd[1500]: time="2026-01-24T00:59:10.410774386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:59:10.411359 kubelet[2546]: E0124 00:59:10.411275 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" 
podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:59:10.847844 containerd[1500]: time="2026-01-24T00:59:10.847784693Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:59:10.849355 containerd[1500]: time="2026-01-24T00:59:10.849305590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:59:10.849438 containerd[1500]: time="2026-01-24T00:59:10.849400299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:59:10.849748 kubelet[2546]: E0124 00:59:10.849698 2546 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:59:10.849903 kubelet[2546]: E0124 00:59:10.849790 2546 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:59:10.850045 kubelet[2546]: E0124 00:59:10.849965 2546 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhfqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9lcpv_calico-system(267130dd-42b7-45fa-9166-0420d7cd47cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:10.851792 kubelet[2546]: E0124 00:59:10.851567 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:59:11.276125 systemd[1]: Started sshd@17-89.167.6.198:22-20.161.92.111:46994.service - OpenSSH per-connection server daemon (20.161.92.111:46994). Jan 24 00:59:12.056885 sshd[5598]: Accepted publickey for core from 20.161.92.111 port 46994 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:59:12.058935 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:12.066471 systemd-logind[1476]: New session 18 of user core. Jan 24 00:59:12.075156 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:59:12.660148 sshd[5598]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:12.664057 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:59:12.664787 systemd[1]: sshd@17-89.167.6.198:22-20.161.92.111:46994.service: Deactivated successfully. Jan 24 00:59:12.666457 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:59:12.669214 systemd-logind[1476]: Removed session 18. 
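Every Calico pull so far (apiserver, kube-controllers, goldmane) fails identically: ghcr.io answers 404 for the v3.30.4 tag, containerd maps that to NotFound, and kubelet surfaces ErrImagePull. A minimal way to confirm from the node whether the tag exists at all is to probe the registry directly and then reproduce the pull through the same CRI path the kubelet used (a sketch: the image reference is copied from the log above; it assumes curl, jq, and crictl are installed on the host):

  # Anonymous pull token for a public GHCR repository, then a manifest probe
  TOKEN=$(curl -s "https://ghcr.io/token?scope=repository:flatcar/calico/apiserver:pull" | jq -r .token)
  curl -s -o /dev/null -w '%{http_code}\n' \
       -H "Authorization: Bearer $TOKEN" \
       -H "Accept: application/vnd.oci.image.index.v1+json" \
       https://ghcr.io/v2/flatcar/calico/apiserver/manifests/v3.30.4

  # Reproduce the exact pull the kubelet attempted, via the CRI socket
  crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4

A 404 from the manifest probe matches the "trying next host - response was http.StatusNotFound" lines above and rules out node-local DNS or credential problems.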
Jan 24 00:59:12.973070 kubelet[2546]: E0124 00:59:12.972983 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:59:14.970949 kubelet[2546]: E0124 00:59:14.970875 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:59:17.799120 systemd[1]: Started sshd@18-89.167.6.198:22-20.161.92.111:45288.service - OpenSSH per-connection server daemon (20.161.92.111:45288). Jan 24 00:59:18.576183 sshd[5611]: Accepted publickey for core from 20.161.92.111 port 45288 ssh2: RSA SHA256:OsSs7dy1EZ4NwQ5GvwLn/kngMzUyINAIgjgXHlkMFNw Jan 24 00:59:18.577719 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:18.584291 systemd-logind[1476]: New session 19 of user core. Jan 24 00:59:18.588293 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:59:19.176912 sshd[5611]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:19.180055 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:59:19.182526 systemd[1]: sshd@18-89.167.6.198:22-20.161.92.111:45288.service: Deactivated successfully. Jan 24 00:59:19.185430 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:59:19.187075 systemd-logind[1476]: Removed session 19. 
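The recurrence pattern in these entries is kubelet's image-pull back-off: by default it starts at 10 seconds and doubles per failure up to a 5-minute cap, which is why the same pods (csi-node-driver-ftl5s, the apiserver replicas) reappear at widening intervals. The accumulated back-off history is easiest to read from the pod's events (a sketch; pod and namespace names are taken from the log):

  kubectl -n calico-system get events \
      --field-selector involvedObject.name=csi-node-driver-ftl5s \
      --sort-by=.lastTimestamp
  kubectl -n calico-system describe pod csi-node-driver-ftl5s | sed -n '/^Events:/,$p'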
Jan 24 00:59:21.976147 kubelet[2546]: E0124 00:59:21.975996 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:59:22.971457 kubelet[2546]: E0124 00:59:22.971393 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:59:23.972761 kubelet[2546]: E0124 00:59:23.972609 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:59:23.972761 kubelet[2546]: E0124 00:59:23.972708 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:59:24.970485 kubelet[2546]: E0124 00:59:24.970443 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:59:24.971035 kubelet[2546]: E0124 00:59:24.971004 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:59:25.972719 kubelet[2546]: E0124 00:59:25.972627 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:59:32.971612 kubelet[2546]: E0124 00:59:32.971488 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:59:35.970078 kubelet[2546]: E0124 00:59:35.969936 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:59:35.971995 kubelet[2546]: E0124 00:59:35.971966 2546 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:59:36.430199 kubelet[2546]: I0124 00:59:36.430056 2546 status_manager.go:890] "Failed to get status for pod" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" pod="calico-system/csi-node-driver-ftl5s" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35738->10.0.0.2:2379: read: connection timed out" Jan 24 00:59:36.430368 kubelet[2546]: E0124 00:59:36.430340 2546 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35830->10.0.0.2:2379: read: connection timed out" Jan 24 00:59:36.440623 kubelet[2546]: E0124 00:59:36.440431 2546 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35626->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{csi-node-driver-ftl5s.188d84cfb804bf20 calico-system 1629 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-ftl5s,UID:43bd5f1f-4a0c-4b9f-b986-69bf7780bcee,APIVersion:v1,ResourceVersion:699,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-32cc93a80b,},FirstTimestamp:2026-01-24 00:57:36 +0000 UTC,LastTimestamp:2026-01-24 00:59:25.971941645 +0000 UTC m=+158.121883178,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-32cc93a80b,}" Jan 24 00:59:36.446908 systemd[1]: cri-containerd-171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb.scope: Deactivated successfully. Jan 24 00:59:36.447332 systemd[1]: cri-containerd-171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb.scope: Consumed 2.367s CPU time, 15.4M memory peak, 0B memory swap peak. Jan 24 00:59:36.497985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb-rootfs.mount: Deactivated successfully. Jan 24 00:59:36.502429 containerd[1500]: time="2026-01-24T00:59:36.502322724Z" level=info msg="shim disconnected" id=171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb namespace=k8s.io Jan 24 00:59:36.502429 containerd[1500]: time="2026-01-24T00:59:36.502426213Z" level=warning msg="cleaning up after shim disconnected" id=171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb namespace=k8s.io Jan 24 00:59:36.503389 containerd[1500]: time="2026-01-24T00:59:36.502442343Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:36.542410 systemd[1]: cri-containerd-a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b.scope: Deactivated successfully. 
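From 00:59:36 the failures change character: reads from 10.0.0.3 to the etcd endpoint 10.0.0.2:2379 time out, the node lease update fails, an event is rejected, and the control-plane containers start getting torn down moments later. Checking etcd health directly from a control-plane host would separate an etcd-side stall from a network problem (a sketch; the endpoint is taken from the log, while the certificate paths are assumptions in the usual kubeadm layout and may differ on this image):

  ETCDCTL_API=3 etcdctl \
      --endpoints=https://10.0.0.2:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
      --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
      endpoint health --write-out=table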
Jan 24 00:59:36.543130 systemd[1]: cri-containerd-a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b.scope: Consumed 4.160s CPU time, 18.0M memory peak, 0B memory swap peak. Jan 24 00:59:36.593990 containerd[1500]: time="2026-01-24T00:59:36.593871063Z" level=info msg="shim disconnected" id=a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b namespace=k8s.io Jan 24 00:59:36.593990 containerd[1500]: time="2026-01-24T00:59:36.593952562Z" level=warning msg="cleaning up after shim disconnected" id=a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b namespace=k8s.io Jan 24 00:59:36.593990 containerd[1500]: time="2026-01-24T00:59:36.593967742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:36.595779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b-rootfs.mount: Deactivated successfully. Jan 24 00:59:36.659309 kubelet[2546]: I0124 00:59:36.659258 2546 scope.go:117] "RemoveContainer" containerID="171d68bcb881a0c51562111af18dac67c65223e9782eb71e1dcc86c62fbe85bb" Jan 24 00:59:36.661571 kubelet[2546]: I0124 00:59:36.660845 2546 scope.go:117] "RemoveContainer" containerID="a9d399184c5d88d397c97598d4cba127e23b7454002d8efebf87a9ffc5d6a08b" Jan 24 00:59:36.664910 containerd[1500]: time="2026-01-24T00:59:36.664822334Z" level=info msg="CreateContainer within sandbox \"a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 24 00:59:36.666429 containerd[1500]: time="2026-01-24T00:59:36.666291478Z" level=info msg="CreateContainer within sandbox \"b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 24 00:59:36.676726 systemd[1]: cri-containerd-33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302.scope: Deactivated successfully. Jan 24 00:59:36.677925 systemd[1]: cri-containerd-33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302.scope: Consumed 20.653s CPU time. 
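kubelet's response to the two dead control-plane containers is an in-place restart: scope.go logs RemoveContainer for the old container IDs, then containerd creates Attempt:1 replacements inside the existing kube-scheduler and kube-controller-manager sandboxes. The attempt counter and the previous exit status can be read back through the CRI (a sketch; the truncated container ID is the first one from the log, crictl and jq assumed present):

  crictl ps -a --name kube-scheduler -o json \
      | jq '.containers[] | {id: .id[:13], state: .state, attempt: .metadata.attempt}'
  crictl inspect 171d68bcb881a | jq '.status | {exitCode, reason, finishedAt}'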
Jan 24 00:59:36.708245 containerd[1500]: time="2026-01-24T00:59:36.707796328Z" level=info msg="CreateContainer within sandbox \"a660fb331b2bd891397c1992c1bb8c341521198d55b68c79ed916ee7f55e8cad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c91a831e383508ec12e8126eb51a434556bf795022d6cd37e6aebd5d98bed5ab\"" Jan 24 00:59:36.710505 containerd[1500]: time="2026-01-24T00:59:36.710368871Z" level=info msg="CreateContainer within sandbox \"b156be70aefdd7348cf610581c490ca4de06525d399a71f11220a227e72c608d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"809987e883e99ecd4bf11b4e6a41518d3e6757c0ab23fc8b3f1a990bd88ad38d\"" Jan 24 00:59:36.710818 containerd[1500]: time="2026-01-24T00:59:36.710723937Z" level=info msg="StartContainer for \"c91a831e383508ec12e8126eb51a434556bf795022d6cd37e6aebd5d98bed5ab\"" Jan 24 00:59:36.711358 containerd[1500]: time="2026-01-24T00:59:36.711310040Z" level=info msg="StartContainer for \"809987e883e99ecd4bf11b4e6a41518d3e6757c0ab23fc8b3f1a990bd88ad38d\"" Jan 24 00:59:36.770449 containerd[1500]: time="2026-01-24T00:59:36.770195822Z" level=info msg="shim disconnected" id=33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302 namespace=k8s.io Jan 24 00:59:36.770449 containerd[1500]: time="2026-01-24T00:59:36.770247002Z" level=warning msg="cleaning up after shim disconnected" id=33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302 namespace=k8s.io Jan 24 00:59:36.770449 containerd[1500]: time="2026-01-24T00:59:36.770259622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:36.772312 systemd[1]: Started cri-containerd-c91a831e383508ec12e8126eb51a434556bf795022d6cd37e6aebd5d98bed5ab.scope - libcontainer container c91a831e383508ec12e8126eb51a434556bf795022d6cd37e6aebd5d98bed5ab. Jan 24 00:59:36.790093 systemd[1]: Started cri-containerd-809987e883e99ecd4bf11b4e6a41518d3e6757c0ab23fc8b3f1a990bd88ad38d.scope - libcontainer container 809987e883e99ecd4bf11b4e6a41518d3e6757c0ab23fc8b3f1a990bd88ad38d. Jan 24 00:59:36.838632 containerd[1500]: time="2026-01-24T00:59:36.838583432Z" level=info msg="StartContainer for \"c91a831e383508ec12e8126eb51a434556bf795022d6cd37e6aebd5d98bed5ab\" returns successfully" Jan 24 00:59:36.863721 containerd[1500]: time="2026-01-24T00:59:36.863692470Z" level=info msg="StartContainer for \"809987e883e99ecd4bf11b4e6a41518d3e6757c0ab23fc8b3f1a990bd88ad38d\" returns successfully" Jan 24 00:59:36.969603 kubelet[2546]: E0124 00:59:36.969498 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:59:37.498250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302-rootfs.mount: Deactivated successfully. 
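Both replacements come up ("StartContainer ... returns successfully" above), so the scheduler and controller-manager recover from the etcd stall with a single restart each. Once the apiserver is reachable again, the restart counters confirm this from the API side (a sketch; it assumes the static pods carry the kubeadm-style tier=control-plane label, which may not hold on every image):

  kubectl -n kube-system get pods -l tier=control-plane \
      -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount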
Jan 24 00:59:37.665708 kubelet[2546]: I0124 00:59:37.665573 2546 scope.go:117] "RemoveContainer" containerID="33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302" Jan 24 00:59:37.668179 containerd[1500]: time="2026-01-24T00:59:37.668108722Z" level=info msg="CreateContainer within sandbox \"199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 24 00:59:37.685767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896583645.mount: Deactivated successfully. Jan 24 00:59:37.688944 containerd[1500]: time="2026-01-24T00:59:37.688902069Z" level=info msg="CreateContainer within sandbox \"199864c55f5e749fe30f0342e3802029d2f6fb62e2fda72fc543341c34dc43d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f\"" Jan 24 00:59:37.690059 containerd[1500]: time="2026-01-24T00:59:37.689439903Z" level=info msg="StartContainer for \"a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f\"" Jan 24 00:59:37.714876 systemd[1]: Started cri-containerd-a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f.scope - libcontainer container a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f. Jan 24 00:59:37.742721 containerd[1500]: time="2026-01-24T00:59:37.742542133Z" level=info msg="StartContainer for \"a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f\" returns successfully" Jan 24 00:59:37.970783 kubelet[2546]: E0124 00:59:37.970586 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:59:38.971058 kubelet[2546]: E0124 00:59:38.970991 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:59:39.970815 kubelet[2546]: E0124 00:59:39.970726 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:59:45.970577 kubelet[2546]: E0124 00:59:45.970398 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513" Jan 24 00:59:46.431408 kubelet[2546]: E0124 00:59:46.431031 2546 controller.go:195] "Failed to update lease" err="Put \"https://89.167.6.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32cc93a80b?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:59:48.875221 systemd[1]: cri-containerd-a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f.scope: Deactivated successfully. Jan 24 00:59:48.914281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f-rootfs.mount: Deactivated successfully. 
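The tigera-operator replacement (a43d779b...) survives only from 00:59:37 to 00:59:48 before its scope is deactivated; the shim cleanup and the resulting CrashLoopBackOff follow below. Each restart replaces the stored logs of the prior instance, so the previous container's output is worth capturing promptly (a sketch; pod and container names are taken from the surrounding log):

  kubectl -n tigera-operator logs tigera-operator-7dcd859c48-kb5r9 --previous --tail=50
  crictl logs --tail 50 a43d779b1812d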
Jan 24 00:59:48.926330 containerd[1500]: time="2026-01-24T00:59:48.926219853Z" level=info msg="shim disconnected" id=a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f namespace=k8s.io Jan 24 00:59:48.926330 containerd[1500]: time="2026-01-24T00:59:48.926320612Z" level=warning msg="cleaning up after shim disconnected" id=a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f namespace=k8s.io Jan 24 00:59:48.926330 containerd[1500]: time="2026-01-24T00:59:48.926333882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:49.700537 kubelet[2546]: I0124 00:59:49.700258 2546 scope.go:117] "RemoveContainer" containerID="33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302" Jan 24 00:59:49.702957 kubelet[2546]: I0124 00:59:49.700863 2546 scope.go:117] "RemoveContainer" containerID="a43d779b1812d67b693ae57651f34f0a8979d2b21e23643d028d74e50b61a68f" Jan 24 00:59:49.702957 kubelet[2546]: E0124 00:59:49.701107 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-kb5r9_tigera-operator(ec8ba4de-8ded-459b-bd27-14288e528b4d)\"" pod="tigera-operator/tigera-operator-7dcd859c48-kb5r9" podUID="ec8ba4de-8ded-459b-bd27-14288e528b4d" Jan 24 00:59:49.703959 containerd[1500]: time="2026-01-24T00:59:49.703898373Z" level=info msg="RemoveContainer for \"33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302\"" Jan 24 00:59:49.711169 containerd[1500]: time="2026-01-24T00:59:49.711109752Z" level=info msg="RemoveContainer for \"33f2d89ca355e9d418a6570f4c2c0038f5d5a37d11e1a7f203197674cf96e302\" returns successfully" Jan 24 00:59:49.972148 kubelet[2546]: E0124 00:59:49.971512 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-qsdz4" podUID="abee6eff-7ee6-4417-a4eb-5f0514e6e7e9" Jan 24 00:59:49.972809 kubelet[2546]: E0124 00:59:49.972573 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ftl5s" podUID="43bd5f1f-4a0c-4b9f-b986-69bf7780bcee" Jan 24 00:59:50.970157 kubelet[2546]: E0124 00:59:50.970049 2546 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59667657-b8mx9" podUID="3be98e24-0896-49a9-8666-4ca8f66cf2c8" Jan 24 00:59:50.970157 kubelet[2546]: E0124 00:59:50.970075 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccf5-5whtp" podUID="92edd234-ce88-420a-bb1b-56d2f203263f" Jan 24 00:59:51.970781 kubelet[2546]: E0124 00:59:51.970527 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9lcpv" podUID="267130dd-42b7-45fa-9166-0420d7cd47cc" Jan 24 00:59:53.970372 kubelet[2546]: E0124 00:59:53.970282 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff89d9558-pr2mw" podUID="e25c9c50-eb09-419b-a216-dabe2aa24f5e" Jan 24 00:59:56.432645 kubelet[2546]: E0124 00:59:56.432576 2546 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-6-n-32cc93a80b)" Jan 24 00:59:58.971219 kubelet[2546]: E0124 00:59:58.971136 2546 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bf994bc7f-8g8k6" podUID="52940e35-8fee-4532-9c73-0644eb969513"