Jan 24 00:31:40.125389 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:31:40.125408 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:31:40.125418 kernel: BIOS-provided physical RAM map: Jan 24 00:31:40.125423 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:31:40.125427 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable Jan 24 00:31:40.125431 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved Jan 24 00:31:40.125436 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable Jan 24 00:31:40.125441 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved Jan 24 00:31:40.125445 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20 Jan 24 00:31:40.125449 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved Jan 24 00:31:40.125454 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 24 00:31:40.125461 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 24 00:31:40.125466 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable Jan 24 00:31:40.125470 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved Jan 24 00:31:40.125475 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 24 00:31:40.125480 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:31:40.125487 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 24 00:31:40.125492 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable Jan 24 00:31:40.125496 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:31:40.125501 kernel: NX (Execute Disable) protection: active Jan 24 00:31:40.125506 kernel: APIC: Static calls initialized Jan 24 00:31:40.125510 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 24 00:31:40.125515 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e00c198 Jan 24 00:31:40.125520 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 24 00:31:40.125525 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 24 00:31:40.125529 kernel: SMBIOS 3.0.0 present. 
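The BIOS-e820 entries above are the firmware's physical memory map; only the ranges marked "usable" become RAM for the kernel. A minimal sketch of totalling those ranges from a saved copy of this log (the file name dmesg.txt is an assumption, not something the log provides):

```python
import re

# Sum the "usable" BIOS-e820 ranges from a saved boot log.
# "dmesg.txt" is a hypothetical file holding the text above.
pat = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] "
    r"(usable|reserved|ACPI data|ACPI NVS|type \d+)"
)
usable = 0
with open("dmesg.txt") as f:
    for start, end, kind in pat.findall(f.read()):
        if kind == "usable":
            usable += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
print(f"usable RAM: {usable / 2**20:.1f} MiB")
```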
Jan 24 00:31:40.125534 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jan 24 00:31:40.125539 kernel: Hypervisor detected: KVM Jan 24 00:31:40.125546 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:31:40.125551 kernel: kvm-clock: using sched offset of 12746534377 cycles Jan 24 00:31:40.125555 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:31:40.125560 kernel: tsc: Detected 2399.998 MHz processor Jan 24 00:31:40.125565 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:31:40.125570 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:31:40.125575 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000 Jan 24 00:31:40.125580 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:31:40.125585 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:31:40.125592 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000 Jan 24 00:31:40.125597 kernel: Using GB pages for direct mapping Jan 24 00:31:40.125602 kernel: Secure boot disabled Jan 24 00:31:40.125618 kernel: ACPI: Early table checksum verification disabled Jan 24 00:31:40.125623 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 24 00:31:40.125628 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 24 00:31:40.125633 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:31:40.125640 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:31:40.125646 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 24 00:31:40.125651 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:31:40.125656 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:31:40.125661 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:31:40.125666 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:31:40.125671 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 24 00:31:40.125678 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3] Jan 24 00:31:40.125683 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442] Jan 24 00:31:40.125688 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 24 00:31:40.125693 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f] Jan 24 00:31:40.125698 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037] Jan 24 00:31:40.125703 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b] Jan 24 00:31:40.125708 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027] Jan 24 00:31:40.125713 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037] Jan 24 00:31:40.125718 kernel: No NUMA configuration found Jan 24 00:31:40.125726 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff] Jan 24 00:31:40.125731 kernel: NODE_DATA(0) allocated [mem 0x179ff8000-0x179ffdfff] Jan 24 00:31:40.125736 kernel: Zone ranges: Jan 24 00:31:40.125741 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:31:40.125746 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:31:40.125751 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff] Jan 24 
00:31:40.125756 kernel: Movable zone start for each node Jan 24 00:31:40.125761 kernel: Early memory node ranges Jan 24 00:31:40.125766 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:31:40.125771 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff] Jan 24 00:31:40.125779 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff] Jan 24 00:31:40.125784 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff] Jan 24 00:31:40.125789 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff] Jan 24 00:31:40.125793 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff] Jan 24 00:31:40.125799 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:31:40.125804 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:31:40.125808 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 24 00:31:40.125814 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 24 00:31:40.125819 kernel: On node 0, zone Normal: 132 pages in unavailable ranges Jan 24 00:31:40.125827 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 24 00:31:40.125832 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:31:40.125863 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:31:40.125868 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:31:40.125873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:31:40.125878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:31:40.125883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:31:40.125888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:31:40.125893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:31:40.125901 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:31:40.125906 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:31:40.125911 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:31:40.125916 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:31:40.125921 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 24 00:31:40.125926 kernel: Booting paravirtualized kernel on KVM Jan 24 00:31:40.125931 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:31:40.125937 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:31:40.125942 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:31:40.125949 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:31:40.125954 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:31:40.125959 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 24 00:31:40.125965 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:31:40.125970 kernel: random: crng init done Jan 24 00:31:40.125975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:31:40.125980 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 
24 00:31:40.125985 kernel: Fallback order for Node 0: 0 Jan 24 00:31:40.125993 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632 Jan 24 00:31:40.125998 kernel: Policy zone: Normal Jan 24 00:31:40.126003 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:31:40.126008 kernel: software IO TLB: area num 2. Jan 24 00:31:40.126013 kernel: Memory: 3827828K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 263136K reserved, 0K cma-reserved) Jan 24 00:31:40.126018 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:31:40.126023 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:31:40.126028 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:31:40.126033 kernel: Dynamic Preempt: voluntary Jan 24 00:31:40.126041 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:31:40.126046 kernel: rcu: RCU event tracing is enabled. Jan 24 00:31:40.126052 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:31:40.126057 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:31:40.126069 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:31:40.126076 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:31:40.126081 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:31:40.126086 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:31:40.126092 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:31:40.126097 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:31:40.126102 kernel: Console: colour dummy device 80x25 Jan 24 00:31:40.126107 kernel: printk: console [tty0] enabled Jan 24 00:31:40.126112 kernel: printk: console [ttyS0] enabled Jan 24 00:31:40.126120 kernel: ACPI: Core revision 20230628 Jan 24 00:31:40.126126 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:31:40.126131 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:31:40.126136 kernel: x2apic enabled Jan 24 00:31:40.126141 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:31:40.126149 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:31:40.126155 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:31:40.126160 kernel: Calibrating delay loop (skipped) preset value.. 
4799.99 BogoMIPS (lpj=2399998) Jan 24 00:31:40.126165 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:31:40.126171 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:31:40.126176 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:31:40.126181 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:31:40.126186 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 24 00:31:40.126194 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:31:40.126199 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:31:40.126205 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:31:40.126210 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET Jan 24 00:31:40.126215 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:31:40.126220 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:31:40.126226 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:31:40.126231 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:31:40.126236 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:31:40.126244 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:31:40.126249 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:31:40.126254 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:31:40.126259 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:31:40.126265 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:31:40.126270 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:31:40.126275 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:31:40.126280 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:31:40.126286 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 24 00:31:40.126293 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Jan 24 00:31:40.126299 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:31:40.126304 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:31:40.126309 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:31:40.126314 kernel: landlock: Up and running. Jan 24 00:31:40.126319 kernel: SELinux: Initializing. Jan 24 00:31:40.126325 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:31:40.126330 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:31:40.126335 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0) Jan 24 00:31:40.126343 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:31:40.126348 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:31:40.126353 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:31:40.126358 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 24 00:31:40.126364 kernel: ... version: 0 Jan 24 00:31:40.126369 kernel: ... bit width: 48 Jan 24 00:31:40.126374 kernel: ... 
generic registers: 6 Jan 24 00:31:40.126379 kernel: ... value mask: 0000ffffffffffff Jan 24 00:31:40.126384 kernel: ... max period: 00007fffffffffff Jan 24 00:31:40.126392 kernel: ... fixed-purpose events: 0 Jan 24 00:31:40.126397 kernel: ... event mask: 000000000000003f Jan 24 00:31:40.126402 kernel: signal: max sigframe size: 3376 Jan 24 00:31:40.126407 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:31:40.126413 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:31:40.126418 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:31:40.126423 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:31:40.126434 kernel: .... node #0, CPUs: #1 Jan 24 00:31:40.126439 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:31:40.126446 kernel: smpboot: Max logical packages: 1 Jan 24 00:31:40.126452 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS) Jan 24 00:31:40.126457 kernel: devtmpfs: initialized Jan 24 00:31:40.126462 kernel: x86/mm: Memory block size: 128MB Jan 24 00:31:40.126467 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 24 00:31:40.126473 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:31:40.126478 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:31:40.126483 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:31:40.126488 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:31:40.126496 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:31:40.126501 kernel: audit: type=2000 audit(1769214698.840:1): state=initialized audit_enabled=0 res=1 Jan 24 00:31:40.126508 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:31:40.126517 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:31:40.126525 kernel: cpuidle: using governor menu Jan 24 00:31:40.126533 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:31:40.126541 kernel: dca service started, version 1.12.1 Jan 24 00:31:40.126549 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 24 00:31:40.126557 kernel: PCI: Using configuration type 1 for base access Jan 24 00:31:40.126569 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
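The MMCONFIG line above places the PCIe ECAM window at base 0xe0000000 for buses 00-ff. As a rough illustration (the layout rule is the ECAM standard, not something taken from this log), each function's 4 KiB config space sits at a fixed offset inside that window:

```python
ECAM_BASE = 0xE0000000  # base from the MMCONFIG line above

def ecam_address(bus: int, dev: int, fn: int, reg: int = 0) -> int:
    # ECAM layout: 1 MiB per bus, 32 KiB per device, 4 KiB per function.
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8 and 0 <= reg < 4096
    return ECAM_BASE | (bus << 20) | (dev << 15) | (fn << 12) | reg

# e.g. the SATA controller 0000:00:1f.2 enumerated later in this log:
print(hex(ecam_address(0x00, 0x1F, 2)))  # 0xe00fa000
```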
Jan 24 00:31:40.126576 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:31:40.126582 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:31:40.126587 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:31:40.126592 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:31:40.126598 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:31:40.126612 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:31:40.126617 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:31:40.126623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:31:40.126630 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:31:40.126635 kernel: ACPI: Interpreter enabled Jan 24 00:31:40.126641 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:31:40.126646 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:31:40.126651 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:31:40.126656 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:31:40.126662 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:31:40.126667 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:31:40.126824 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:31:40.126969 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:31:40.127067 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:31:40.127074 kernel: PCI host bridge to bus 0000:00 Jan 24 00:31:40.127174 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:31:40.127262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:31:40.127350 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:31:40.127440 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 24 00:31:40.127526 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 24 00:31:40.127622 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window] Jan 24 00:31:40.127709 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:31:40.127816 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:31:40.127950 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jan 24 00:31:40.128048 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref] Jan 24 00:31:40.128147 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref] Jan 24 00:31:40.128242 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff] Jan 24 00:31:40.128339 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:31:40.128434 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 24 00:31:40.128529 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:31:40.128640 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.128740 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff] Jan 24 00:31:40.128857 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.128953 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff] Jan 24 00:31:40.129055 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.129150 kernel: pci 0000:00:02.2: reg 0x10: [mem 
0x81387000-0x81387fff] Jan 24 00:31:40.129252 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.129350 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff] Jan 24 00:31:40.129451 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.129545 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff] Jan 24 00:31:40.129654 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.129750 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff] Jan 24 00:31:40.129860 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.129971 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff] Jan 24 00:31:40.130078 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.130173 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff] Jan 24 00:31:40.130273 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 24 00:31:40.130369 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff] Jan 24 00:31:40.130470 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:31:40.130565 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:31:40.130683 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:31:40.130779 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f] Jan 24 00:31:40.130910 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff] Jan 24 00:31:40.131012 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:31:40.131106 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f] Jan 24 00:31:40.132873 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 24 00:31:40.133003 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff] Jan 24 00:31:40.133107 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref] Jan 24 00:31:40.133209 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 24 00:31:40.133306 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 24 00:31:40.133401 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Jan 24 00:31:40.133497 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:31:40.133615 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 24 00:31:40.133720 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit] Jan 24 00:31:40.133815 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 24 00:31:40.133993 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Jan 24 00:31:40.134102 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 24 00:31:40.134201 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff] Jan 24 00:31:40.134299 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref] Jan 24 00:31:40.134395 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 24 00:31:40.134493 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Jan 24 00:31:40.134587 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:31:40.134702 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 24 00:31:40.134801 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref] Jan 24 00:31:40.134923 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 24 00:31:40.135019 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:31:40.135125 
kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 24 00:31:40.135229 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff] Jan 24 00:31:40.135327 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref] Jan 24 00:31:40.135422 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 24 00:31:40.135518 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Jan 24 00:31:40.135619 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:31:40.135725 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 24 00:31:40.135825 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff] Jan 24 00:31:40.136014 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref] Jan 24 00:31:40.136109 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 24 00:31:40.136202 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Jan 24 00:31:40.136295 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:31:40.136301 kernel: acpiphp: Slot [0] registered Jan 24 00:31:40.136409 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 24 00:31:40.136508 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff] Jan 24 00:31:40.136614 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 24 00:31:40.136716 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 24 00:31:40.136811 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 24 00:31:40.136941 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Jan 24 00:31:40.137039 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:31:40.137045 kernel: acpiphp: Slot [0-2] registered Jan 24 00:31:40.137142 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 24 00:31:40.137236 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Jan 24 00:31:40.137330 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:31:40.137340 kernel: acpiphp: Slot [0-3] registered Jan 24 00:31:40.137435 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 24 00:31:40.137528 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Jan 24 00:31:40.137631 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:31:40.137638 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:31:40.137643 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:31:40.137648 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:31:40.137654 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:31:40.137662 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:31:40.137668 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:31:40.137673 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:31:40.137678 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:31:40.137684 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:31:40.137689 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:31:40.137694 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:31:40.137699 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:31:40.137705 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:31:40.137712 kernel: ACPI: PCI: 
Interrupt link GSIF configured for IRQ 21 Jan 24 00:31:40.137717 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:31:40.137723 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:31:40.137728 kernel: iommu: Default domain type: Translated Jan 24 00:31:40.137733 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:31:40.137738 kernel: efivars: Registered efivars operations Jan 24 00:31:40.137744 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:31:40.137749 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:31:40.137754 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff] Jan 24 00:31:40.137762 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 24 00:31:40.137767 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff] Jan 24 00:31:40.137772 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff] Jan 24 00:31:40.137882 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:31:40.137977 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:31:40.138072 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:31:40.138078 kernel: vgaarb: loaded Jan 24 00:31:40.138084 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:31:40.138089 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:31:40.138098 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:31:40.138103 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:31:40.138109 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:31:40.138114 kernel: pnp: PnP ACPI init Jan 24 00:31:40.138218 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 24 00:31:40.138225 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:31:40.138230 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:31:40.138236 kernel: NET: Registered PF_INET protocol family Jan 24 00:31:40.138257 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:31:40.138265 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:31:40.138270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:31:40.138276 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:31:40.138281 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:31:40.138287 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:31:40.138292 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:31:40.138298 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:31:40.138303 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:31:40.138311 kernel: NET: Registered PF_XDP protocol family Jan 24 00:31:40.138412 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window Jan 24 00:31:40.138514 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window Jan 24 00:31:40.138617 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 24 00:31:40.138714 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 24 00:31:40.138809 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 
24 00:31:40.138923 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jan 24 00:31:40.139023 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jan 24 00:31:40.139121 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jan 24 00:31:40.139221 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref] Jan 24 00:31:40.139317 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 24 00:31:40.139415 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Jan 24 00:31:40.139510 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:31:40.139612 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 24 00:31:40.139707 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Jan 24 00:31:40.139802 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 24 00:31:40.139908 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Jan 24 00:31:40.140003 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:31:40.140100 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 24 00:31:40.140195 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:31:40.140295 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 24 00:31:40.140389 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Jan 24 00:31:40.140483 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:31:40.140577 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 24 00:31:40.140680 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Jan 24 00:31:40.140774 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:31:40.140883 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref] Jan 24 00:31:40.140978 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 24 00:31:40.141075 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 24 00:31:40.141169 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Jan 24 00:31:40.141263 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:31:40.141357 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 24 00:31:40.141454 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 24 00:31:40.141547 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Jan 24 00:31:40.141649 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:31:40.141744 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 24 00:31:40.141854 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 24 00:31:40.141971 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Jan 24 00:31:40.142065 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:31:40.142158 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:31:40.142247 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:31:40.142337 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:31:40.142424 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 24 00:31:40.142510 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 24 00:31:40.142597 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Jan 24 00:31:40.142707 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] 
Jan 24 00:31:40.142800 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Jan 24 00:31:40.144937 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Jan 24 00:31:40.145052 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Jan 24 00:31:40.145146 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Jan 24 00:31:40.145244 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Jan 24 00:31:40.145343 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Jan 24 00:31:40.145434 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Jan 24 00:31:40.145534 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Jan 24 00:31:40.145637 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Jan 24 00:31:40.145738 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 24 00:31:40.145831 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Jan 24 00:31:40.146072 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Jan 24 00:31:40.146171 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 24 00:31:40.146262 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Jan 24 00:31:40.146353 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Jan 24 00:31:40.146453 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 24 00:31:40.146544 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Jan 24 00:31:40.146644 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Jan 24 00:31:40.146651 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:31:40.146657 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:31:40.146662 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:31:40.146668 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Jan 24 00:31:40.146674 kernel: Initialise system trusted keyrings Jan 24 00:31:40.146682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:31:40.146688 kernel: Key type asymmetric registered Jan 24 00:31:40.146693 kernel: Asymmetric key parser 'x509' registered Jan 24 00:31:40.146699 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:31:40.146705 kernel: io scheduler mq-deadline registered Jan 24 00:31:40.146710 kernel: io scheduler kyber registered Jan 24 00:31:40.146715 kernel: io scheduler bfq registered Jan 24 00:31:40.146814 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 00:31:40.146923 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 00:31:40.147022 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 00:31:40.147122 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 00:31:40.147238 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 00:31:40.147336 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 00:31:40.147431 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 00:31:40.147525 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 24 00:31:40.147631 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 00:31:40.147726 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 00:31:40.147825 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 00:31:40.148811 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 00:31:40.149185 kernel: 
pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 00:31:40.149470 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 00:31:40.149759 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 00:31:40.150039 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 00:31:40.150064 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:31:40.150320 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 24 00:31:40.150591 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 24 00:31:40.150631 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:31:40.150649 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 24 00:31:40.150667 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:31:40.150684 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:31:40.150701 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:31:40.150719 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:31:40.150736 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:31:40.158148 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:31:40.158204 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:31:40.158473 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:31:40.158750 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:31:39 UTC (1769214699) Jan 24 00:31:40.159271 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:31:40.159300 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:31:40.159317 kernel: efifb: probing for efifb Jan 24 00:31:40.159334 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Jan 24 00:31:40.159351 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 24 00:31:40.159379 kernel: efifb: scrolling: redraw Jan 24 00:31:40.159396 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:31:40.159413 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:31:40.159430 kernel: fb0: EFI VGA frame buffer device Jan 24 00:31:40.159447 kernel: pstore: Using crash dump compression: deflate Jan 24 00:31:40.159464 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:31:40.159481 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:31:40.159497 kernel: Segment Routing with IPv6 Jan 24 00:31:40.159514 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:31:40.159539 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:31:40.159556 kernel: Key type dns_resolver registered Jan 24 00:31:40.159573 kernel: IPI shorthand broadcast: enabled Jan 24 00:31:40.159589 kernel: sched_clock: Marking stable (1525011623, 191984782)->(1752479690, -35483285) Jan 24 00:31:40.159624 kernel: registered taskstats version 1 Jan 24 00:31:40.159641 kernel: Loading compiled-in X.509 certificates Jan 24 00:31:40.159658 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:31:40.159676 kernel: Key type .fscrypt registered Jan 24 00:31:40.159694 kernel: Key type fscrypt-provisioning registered Jan 24 00:31:40.159716 kernel: ima: No TPM chip found, activating TPM-bypass! 
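The rtc_cmos line above reports both an ISO timestamp and a Unix epoch; a quick standard-library check confirms the two describe the same instant:

```python
from datetime import datetime, timezone

# The RTC line pairs "2026-01-24T00:31:39 UTC" with epoch 1769214699.
print(datetime.fromtimestamp(1769214699, tz=timezone.utc).isoformat())
# -> 2026-01-24T00:31:39+00:00
```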
Jan 24 00:31:40.159733 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:31:40.159750 kernel: ima: No architecture policies found Jan 24 00:31:40.159767 kernel: clk: Disabling unused clocks Jan 24 00:31:40.159784 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:31:40.159801 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:31:40.159818 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:31:40.159834 kernel: Run /init as init process Jan 24 00:31:40.159880 kernel: with arguments: Jan 24 00:31:40.159905 kernel: /init Jan 24 00:31:40.159922 kernel: with environment: Jan 24 00:31:40.159939 kernel: HOME=/ Jan 24 00:31:40.159955 kernel: TERM=linux Jan 24 00:31:40.159978 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:31:40.160001 systemd[1]: Detected virtualization kvm. Jan 24 00:31:40.160019 systemd[1]: Detected architecture x86-64. Jan 24 00:31:40.160043 systemd[1]: Running in initrd. Jan 24 00:31:40.160060 systemd[1]: No hostname configured, using default hostname. Jan 24 00:31:40.160120 systemd[1]: Hostname set to <localhost>. Jan 24 00:31:40.160140 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:31:40.160158 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:31:40.160185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:40.160203 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:31:40.160223 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:31:40.160248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:31:40.160267 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:31:40.160285 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:31:40.160307 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:31:40.160325 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:31:40.160343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:40.160361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:40.160385 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:31:40.160402 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:31:40.160420 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:31:40.160443 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:31:40.160461 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:31:40.160479 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:31:40.160496 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:31:40.160514 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
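The device unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) come from systemd's path escaping, where "/" maps to "-" and other special bytes to "\xNN". A simplified sketch of that visible convention; the real rules, handled by systemd-escape(1), also cover leading dots and multibyte characters, which this ignores:

```python
# Simplified mimic of systemd's device-unit name escaping.
def systemd_escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                    # path separators become dashes
        elif ch.isalnum() or ch in ".:_":
            out.append(ch)                     # safe characters pass through
        else:
            out.append("\\x%02x" % ord(ch))    # everything else is hex-escaped
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```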
Jan 24 00:31:40.160537 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:40.160555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:40.160573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:40.160591 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:31:40.160627 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:31:40.160645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:31:40.160662 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:31:40.160680 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:31:40.160699 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:31:40.160723 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:31:40.160824 systemd-journald[187]: Collecting audit messages is disabled. Jan 24 00:31:40.161435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:40.161457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:31:40.161486 systemd-journald[187]: Journal started Jan 24 00:31:40.161523 systemd-journald[187]: Runtime Journal (/run/log/journal/571b6cb2a7dc4f0a91d412be2179056a) is 8.0M, max 76.3M, 68.3M free. Jan 24 00:31:40.159222 systemd-modules-load[189]: Inserted module 'overlay' Jan 24 00:31:40.171865 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:31:40.171257 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:40.171754 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:31:40.181749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:31:40.188681 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:31:40.185051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:31:40.193048 kernel: Bridge firewalling registered Jan 24 00:31:40.190773 systemd-modules-load[189]: Inserted module 'br_netfilter' Jan 24 00:31:40.199221 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:31:40.200327 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:40.201403 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:31:40.208055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:31:40.209661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:31:40.212999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:31:40.213621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:40.227080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:40.230990 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:31:40.231554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:40.232188 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 24 00:31:40.240963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:31:40.249987 dracut-cmdline[221]: dracut-dracut-053 Jan 24 00:31:40.253439 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:31:40.268094 systemd-resolved[225]: Positive Trust Anchors: Jan 24 00:31:40.268109 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:31:40.268132 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:31:40.272122 systemd-resolved[225]: Defaulting to hostname 'linux'. Jan 24 00:31:40.273059 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:31:40.274082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:40.320878 kernel: SCSI subsystem initialized Jan 24 00:31:40.328873 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:31:40.338900 kernel: iscsi: registered transport (tcp) Jan 24 00:31:40.354973 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:31:40.355022 kernel: QLogic iSCSI HBA Driver Jan 24 00:31:40.416692 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:31:40.424142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:31:40.472269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:31:40.472333 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:31:40.479881 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:31:40.531902 kernel: raid6: avx512x4 gen() 42887 MB/s Jan 24 00:31:40.548881 kernel: raid6: avx512x2 gen() 43494 MB/s Jan 24 00:31:40.566883 kernel: raid6: avx512x1 gen() 48877 MB/s Jan 24 00:31:40.584909 kernel: raid6: avx2x4 gen() 54481 MB/s Jan 24 00:31:40.602886 kernel: raid6: avx2x2 gen() 57082 MB/s Jan 24 00:31:40.621640 kernel: raid6: avx2x1 gen() 45473 MB/s Jan 24 00:31:40.621722 kernel: raid6: using algorithm avx2x2 gen() 57082 MB/s Jan 24 00:31:40.640691 kernel: raid6: .... xor() 37448 MB/s, rmw enabled Jan 24 00:31:40.640772 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:31:40.673883 kernel: xor: automatically using best checksumming function avx Jan 24 00:31:40.783906 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:31:40.802332 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:31:40.810099 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
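dracut echoes the kernel command line above with its own rd.driver.pre=btrfs prepended, which is why rootflags=rw and mount.usrflags=ro appear twice. A sketch of the usual last-one-wins parse, reading /proc/cmdline on a live Linux system (the keys shown are the ones from this log):

```python
# Last-one-wins parse of a kernel command line; duplicated options such
# as the repeated "rootflags=rw" above are therefore harmless.
params = {}
for tok in open("/proc/cmdline").read().split():
    key, _, val = tok.partition("=")
    params[key] = val
print(params.get("root"), params.get("flatcar.oem.id"), params.get("verity.usrhash"))
```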
Jan 24 00:31:40.821256 systemd-udevd[409]: Using default interface naming scheme 'v255'. Jan 24 00:31:40.825692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:40.833071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:31:40.852441 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 24 00:31:40.894138 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:31:40.903971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:31:41.002888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:41.013147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:31:41.049952 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:31:41.053689 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:31:41.054978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:41.056948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:31:41.064107 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:31:41.086364 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:31:41.116859 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:31:41.118117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:31:41.118199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:41.119385 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:31:41.119714 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:41.129739 kernel: scsi host0: Virtio SCSI HBA Jan 24 00:31:41.140897 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 24 00:31:41.136385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:41.148466 kernel: ACPI: bus type USB registered Jan 24 00:31:41.148479 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:31:41.148487 kernel: AES CTR mode by8 optimization enabled Jan 24 00:31:41.148495 kernel: usbcore: registered new interface driver usbfs Jan 24 00:31:41.136768 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:41.142312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:41.154858 kernel: usbcore: registered new interface driver hub Jan 24 00:31:41.161788 kernel: usbcore: registered new device driver usb Jan 24 00:31:41.163039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:41.163508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:41.165177 kernel: libata version 3.00 loaded. Jan 24 00:31:41.177857 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:31:41.178027 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:31:41.179219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 24 00:31:41.186389 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:31:41.189651 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:31:41.189776 kernel: scsi host1: ahci Jan 24 00:31:41.194715 kernel: scsi host2: ahci Jan 24 00:31:41.198879 kernel: scsi host3: ahci Jan 24 00:31:41.201852 kernel: scsi host4: ahci Jan 24 00:31:41.203465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:41.205852 kernel: scsi host5: ahci Jan 24 00:31:41.213856 kernel: scsi host6: ahci Jan 24 00:31:41.214008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:31:41.231511 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 24 00:31:41.233899 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Jan 24 00:31:41.233910 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Jan 24 00:31:41.234258 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Jan 24 00:31:41.234272 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 24 00:31:41.234565 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Jan 24 00:31:41.234596 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 24 00:31:41.234911 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Jan 24 00:31:41.234920 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 24 00:31:41.235133 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Jan 24 00:31:41.235145 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Jan 24 00:31:41.239311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:31:41.239331 kernel: GPT:17805311 != 160006143 Jan 24 00:31:41.239344 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:31:41.242140 kernel: GPT:17805311 != 160006143 Jan 24 00:31:41.242156 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:31:41.244982 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:31:41.249872 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 24 00:31:41.257220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
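The GPT warnings above ("GPT:17805311 != 160006143") are expected on a first boot: the backup GPT header must sit on the disk's last LBA, and the Flatcar image was written to a disk larger than the image itself, so the backup is still where the smaller image put it. Some arithmetic on the values from this log:

```python
SECTOR = 512
old_alt_lba = 17805311             # where the backup GPT header currently is
disk_sectors = 160006144           # the 81.9 GB disk reported above
expected_alt = disk_sectors - 1    # 160006143, the last LBA the kernel expects

print((old_alt_lba + 1) * SECTOR / 2**30)   # original image size, ~8.5 GiB
print(expected_alt - old_alt_lba)           # sectors gained by the larger disk
```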
Jan 24 00:31:41.543925 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:31:41.557871 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:31:41.557938 kernel: ata1.00: applying bridge limits Jan 24 00:31:41.557963 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:31:41.557984 kernel: ata1.00: configured for UDMA/100 Jan 24 00:31:41.564816 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:31:41.568668 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:31:41.568775 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:31:41.579891 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:31:41.584886 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 00:31:41.605895 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 24 00:31:41.606297 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 24 00:31:41.619018 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 24 00:31:41.633116 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 24 00:31:41.633551 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 24 00:31:41.638894 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 24 00:31:41.658671 kernel: hub 1-0:1.0: USB hub found Jan 24 00:31:41.659140 kernel: hub 1-0:1.0: 4 ports detected Jan 24 00:31:41.664257 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:31:41.664713 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 24 00:31:41.664767 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:31:41.677455 kernel: hub 2-0:1.0: USB hub found Jan 24 00:31:41.677955 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (458) Jan 24 00:31:41.699898 kernel: hub 2-0:1.0: 4 ports detected Jan 24 00:31:41.708894 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (454) Jan 24 00:31:41.710526 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 24 00:31:41.716905 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:31:41.726288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 24 00:31:41.749985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:31:41.755666 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 24 00:31:41.756709 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 24 00:31:41.765968 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:31:41.775344 disk-uuid[587]: Primary Header is updated. Jan 24 00:31:41.775344 disk-uuid[587]: Secondary Entries is updated. Jan 24 00:31:41.775344 disk-uuid[587]: Secondary Header is updated. 
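The dev-disk-by\x2dlabel-*.device units above are resolved from filesystem labels and GPT partition labels/UUIDs; \x2d is simply the systemd escape for '-'. The same mapping can be checked from a shell, for example:

    # Labels and partition identifiers systemd matched against
    lsblk -o NAME,SIZE,LABEL,PARTLABEL,PARTUUID /dev/sda
    # Resolve a label to its node, as the ROOT device unit does
    blkid --label ROOT
    # Show how a /dev path becomes an escaped unit name
    systemd-escape --path /dev/disk/by-label/EFI-SYSTEM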
Jan 24 00:31:41.780953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:31:41.786883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:31:41.791903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:31:41.906150 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 24 00:31:42.043873 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 00:31:42.048305 kernel: usbcore: registered new interface driver usbhid Jan 24 00:31:42.048355 kernel: usbhid: USB HID core driver Jan 24 00:31:42.057404 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 24 00:31:42.057442 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 24 00:31:42.805541 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:31:42.808710 disk-uuid[589]: The operation has completed successfully. Jan 24 00:31:42.912148 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:31:42.912380 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:31:42.927102 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:31:42.946606 sh[610]: Success Jan 24 00:31:42.974216 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:31:43.057781 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:31:43.078992 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:31:43.087372 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:31:43.111162 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:31:43.111270 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:43.117110 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:31:43.123170 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:31:43.131188 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:31:43.145893 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:31:43.149039 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:31:43.151138 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:31:43.158216 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:31:43.163240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:31:43.202060 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:43.202147 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:43.202182 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:31:43.216951 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:31:43.217031 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:31:43.245441 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:31:43.246508 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:43.258376 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
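verity-setup.service assembles /dev/mapper/usr from the verity.usrhash= root hash passed on the kernel command line, so /usr is integrity-verified and read-only. One way to inspect the result, assuming the mapping name 'usr' from the log:

    # dm-verity parameters of the /usr mapping
    veritysetup status usr
    # The root hash should match verity.usrhash= on the command line
    grep -o 'verity.usrhash=[0-9a-f]*' /proc/cmdline
    # Same target as device-mapper sees it
    dmsetup table usr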
Jan 24 00:31:43.268224 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:31:43.411554 ignition[710]: Ignition 2.19.0 Jan 24 00:31:43.413078 ignition[710]: Stage: fetch-offline Jan 24 00:31:43.413165 ignition[710]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:43.413187 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:43.413370 ignition[710]: parsed url from cmdline: "" Jan 24 00:31:43.413379 ignition[710]: no config URL provided Jan 24 00:31:43.413391 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:31:43.413410 ignition[710]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:31:43.413422 ignition[710]: failed to fetch config: resource requires networking Jan 24 00:31:43.419369 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:31:43.413758 ignition[710]: Ignition finished successfully Jan 24 00:31:43.431961 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:31:43.441150 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:31:43.494334 systemd-networkd[797]: lo: Link UP Jan 24 00:31:43.494352 systemd-networkd[797]: lo: Gained carrier Jan 24 00:31:43.499101 systemd-networkd[797]: Enumeration completed Jan 24 00:31:43.499269 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:31:43.501129 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:43.501140 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:31:43.501580 systemd[1]: Reached target network.target - Network. Jan 24 00:31:43.503184 systemd-networkd[797]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:43.503194 systemd-networkd[797]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:31:43.504501 systemd-networkd[797]: eth0: Link UP Jan 24 00:31:43.504511 systemd-networkd[797]: eth0: Gained carrier Jan 24 00:31:43.504527 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:43.514542 systemd-networkd[797]: eth1: Link UP Jan 24 00:31:43.514551 systemd-networkd[797]: eth1: Gained carrier Jan 24 00:31:43.514571 systemd-networkd[797]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:43.515218 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
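The fetch-offline stage fails deliberately here: the Hetzner provider needs networking, so Ignition retries in the fetch stage once systemd-networkd has brought eth0 and eth1 up. The link states logged above can be reproduced with networkctl, e.g.:

    # Overall link states, mirroring the Link UP / Gained carrier lines
    networkctl list
    # Addresses, gateway and the matching .network file for one link
    networkctl status eth0
    # Confirm the catch-all unit was used, as the log warns
    networkctl status eth1 | grep -i 'network file'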
Jan 24 00:31:43.544171 ignition[799]: Ignition 2.19.0 Jan 24 00:31:43.544192 ignition[799]: Stage: fetch Jan 24 00:31:43.544492 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:43.544513 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:43.544721 ignition[799]: parsed url from cmdline: "" Jan 24 00:31:43.544730 ignition[799]: no config URL provided Jan 24 00:31:43.544744 ignition[799]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:31:43.544767 ignition[799]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:31:43.544814 ignition[799]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 24 00:31:43.549970 systemd-networkd[797]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:31:43.545166 ignition[799]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:31:43.583001 systemd-networkd[797]: eth0: DHCPv4 address 65.109.167.77/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:31:43.745400 ignition[799]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 24 00:31:43.755490 ignition[799]: GET result: OK Jan 24 00:31:43.755602 ignition[799]: parsing config with SHA512: a06497cf02df9db1995e5bc1b4c792e59b0041dc42d67a62237c26a63f1bdd4be703915d30af54579ba31b8e146613dba3ffe232337e7dd064a507d61711ce68 Jan 24 00:31:43.764963 unknown[799]: fetched base config from "system" Jan 24 00:31:43.765651 ignition[799]: fetch: fetch complete Jan 24 00:31:43.764994 unknown[799]: fetched base config from "system" Jan 24 00:31:43.765681 ignition[799]: fetch: fetch passed Jan 24 00:31:43.765023 unknown[799]: fetched user config from "hetzner" Jan 24 00:31:43.765773 ignition[799]: Ignition finished successfully Jan 24 00:31:43.772444 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:31:43.782201 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:31:43.832634 ignition[807]: Ignition 2.19.0 Jan 24 00:31:43.832676 ignition[807]: Stage: kargs Jan 24 00:31:43.833155 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:43.833201 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:43.834820 ignition[807]: kargs: kargs passed Jan 24 00:31:43.835006 ignition[807]: Ignition finished successfully Jan 24 00:31:43.838486 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:31:43.847141 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:31:43.883967 ignition[813]: Ignition 2.19.0 Jan 24 00:31:43.883987 ignition[813]: Stage: disks Jan 24 00:31:43.884269 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:43.888969 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:31:43.884290 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:43.891459 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:31:43.885592 ignition[813]: disks: disks passed Jan 24 00:31:43.893394 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:31:43.885695 ignition[813]: Ignition finished successfully Jan 24 00:31:43.895707 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:31:43.897696 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:31:43.899188 systemd[1]: Reached target basic.target - Basic System. 
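169.254.169.254 is the Hetzner metadata service; attempt #1 ran before DHCP finished (hence 'network is unreachable') and attempt #2 succeeded once eth0 had its address. The endpoints shown in the log can be queried directly from a booted host:

    # User data Ignition parsed (its SHA512 appears in the log)
    curl -s http://169.254.169.254/hetzner/v1/userdata
    # Metadata used later by coreos-metadata for the hostname
    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname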
Jan 24 00:31:43.910147 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:31:43.949815 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 24 00:31:43.957208 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:31:43.966130 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:31:44.099047 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:31:44.099421 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:31:44.100540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:31:44.106913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:31:44.108966 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:31:44.113967 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:31:44.114682 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:31:44.114938 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:31:44.129105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:31:44.140903 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (829) Jan 24 00:31:44.143179 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:31:44.166790 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:44.166826 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:44.166862 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:31:44.174323 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:31:44.174370 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:31:44.186044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:31:44.244309 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:31:44.253300 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:31:44.261221 coreos-metadata[831]: Jan 24 00:31:44.260 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 24 00:31:44.264169 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:31:44.265385 coreos-metadata[831]: Jan 24 00:31:44.262 INFO Fetch successful Jan 24 00:31:44.265385 coreos-metadata[831]: Jan 24 00:31:44.263 INFO wrote hostname ci-4081-3-6-n-a9e48d2ea0 to /sysroot/etc/hostname Jan 24 00:31:44.265563 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:31:44.270902 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:31:44.425812 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:31:44.439034 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:31:44.443130 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:31:44.459412 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:31:44.466940 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:44.510296 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
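initrd-setup-root seeds /sysroot/etc/passwd, group, shadow and gshadow (the 'No such file or directory' lines are the normal first-boot case), and flatcar-metadata-hostname writes the name fetched from the metadata service. After switch-root the outcome can be verified with:

    # Hostname installed by the metadata agent
    cat /etc/hostname
    hostnamectl
    # User account Ignition manages in the files stage below
    getent passwd core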
Jan 24 00:31:44.512383 ignition[945]: INFO : Ignition 2.19.0 Jan 24 00:31:44.512383 ignition[945]: INFO : Stage: mount Jan 24 00:31:44.512383 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:44.512383 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:44.517697 ignition[945]: INFO : mount: mount passed Jan 24 00:31:44.517697 ignition[945]: INFO : Ignition finished successfully Jan 24 00:31:44.517817 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:31:44.530005 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:31:44.552054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:31:44.573914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (957) Jan 24 00:31:44.581992 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:44.582074 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:44.587005 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:31:44.599492 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:31:44.599568 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:31:44.608659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:31:44.650350 ignition[974]: INFO : Ignition 2.19.0 Jan 24 00:31:44.650350 ignition[974]: INFO : Stage: files Jan 24 00:31:44.652890 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:44.652890 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:44.652890 ignition[974]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:31:44.655422 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:31:44.655422 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:31:44.660600 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:31:44.661882 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:31:44.661882 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:31:44.661600 unknown[974]: wrote ssh authorized keys file for user: core Jan 24 00:31:44.665817 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:31:44.665817 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:31:44.665817 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:31:44.665817 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:31:44.892957 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:31:45.196923 ignition[974]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:31:45.196923 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:45.208762 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:45.208762 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:45.208762 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:31:45.292400 systemd-networkd[797]: eth0: Gained IPv6LL Jan 24 00:31:45.293082 systemd-networkd[797]: eth1: Gained IPv6LL Jan 24 00:31:45.626329 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 00:31:45.982441 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:31:45.982441 ignition[974]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(e): op(f): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:31:45.985910 ignition[974]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:31:46.002067 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:31:46.002067 ignition[974]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:31:46.002067 ignition[974]: INFO : files: files passed Jan 24 00:31:46.002067 ignition[974]: INFO : Ignition finished successfully Jan 24 00:31:45.989538 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:31:46.002284 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:31:46.011059 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:31:46.015467 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:31:46.016939 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:31:46.036903 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:46.038280 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:46.039415 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:46.042393 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:31:46.044461 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:31:46.052087 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:31:46.122758 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:31:46.123048 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:31:46.125430 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:31:46.127068 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:31:46.129166 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:31:46.136226 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:31:46.164446 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 24 00:31:46.172073 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:31:46.200518 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:46.202749 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:46.204019 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:31:46.206116 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:31:46.206294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:31:46.209239 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:31:46.211349 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:31:46.213130 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:31:46.214899 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:31:46.216960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:31:46.218881 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:31:46.220820 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:31:46.222687 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:31:46.224670 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:31:46.226599 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:31:46.228487 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:31:46.228672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:31:46.231934 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:46.233889 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:46.235782 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:31:46.236753 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:46.237773 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:31:46.237988 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:31:46.240567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:31:46.240779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:31:46.242547 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:31:46.242726 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:31:46.244331 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:31:46.244497 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:31:46.254168 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:31:46.255177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:31:46.255433 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:46.259093 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:31:46.262968 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:31:46.263249 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:46.266047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 24 00:31:46.266215 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:31:46.279209 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:31:46.280100 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:31:46.295682 ignition[1027]: INFO : Ignition 2.19.0 Jan 24 00:31:46.298095 ignition[1027]: INFO : Stage: umount Jan 24 00:31:46.298095 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:46.298095 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 24 00:31:46.301711 ignition[1027]: INFO : umount: umount passed Jan 24 00:31:46.302468 ignition[1027]: INFO : Ignition finished successfully Jan 24 00:31:46.305314 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:31:46.305903 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:31:46.308421 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:31:46.308564 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:31:46.310009 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:31:46.310085 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:31:46.312965 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:31:46.313035 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:31:46.314267 systemd[1]: Stopped target network.target - Network. Jan 24 00:31:46.314940 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:31:46.315012 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:31:46.315708 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:31:46.319038 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:31:46.319177 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:31:46.320793 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:31:46.322332 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:31:46.324015 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:31:46.324115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:31:46.325616 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:31:46.325722 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:31:46.327218 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:31:46.327325 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:31:46.328871 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:31:46.328974 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:31:46.330791 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:31:46.332209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:31:46.336243 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:31:46.337400 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:31:46.337596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:31:46.338579 systemd-networkd[797]: eth0: DHCPv6 lease lost Jan 24 00:31:46.340433 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:31:46.340573 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 24 00:31:46.341905 systemd-networkd[797]: eth1: DHCPv6 lease lost Jan 24 00:31:46.344731 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:31:46.345304 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:31:46.349335 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:31:46.349552 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:31:46.352369 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:31:46.352451 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:46.358050 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:31:46.358705 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:31:46.358800 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:31:46.359662 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:31:46.359753 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:46.360652 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:31:46.360747 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:31:46.362105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:31:46.362179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:46.369628 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:31:46.388982 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:31:46.389338 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:46.391764 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:31:46.392342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:46.393717 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:31:46.393796 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:46.395529 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:31:46.395626 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:31:46.398321 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:31:46.398420 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:31:46.401082 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:31:46.401180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:46.410180 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:31:46.411246 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:31:46.411358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:31:46.414015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:46.414113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:46.415711 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:31:46.420577 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:31:46.430565 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 24 00:31:46.431891 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:31:46.433287 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:31:46.441096 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:31:46.457547 systemd[1]: Switching root. Jan 24 00:31:46.508222 systemd-journald[187]: Journal stopped Jan 24 00:31:48.070528 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 24 00:31:48.070601 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:31:48.070612 kernel: SELinux: policy capability open_perms=1 Jan 24 00:31:48.070621 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:31:48.070634 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:31:48.070643 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:31:48.070658 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:31:48.070666 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:31:48.070675 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:31:48.070686 kernel: audit: type=1403 audit(1769214706.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:31:48.070695 systemd[1]: Successfully loaded SELinux policy in 89.202ms. Jan 24 00:31:48.070727 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.673ms. Jan 24 00:31:48.070737 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:31:48.070751 systemd[1]: Detected virtualization kvm. Jan 24 00:31:48.070760 systemd[1]: Detected architecture x86-64. Jan 24 00:31:48.070768 systemd[1]: Detected first boot. Jan 24 00:31:48.070777 systemd[1]: Hostname set to <ci-4081-3-6-n-a9e48d2ea0>. Jan 24 00:31:48.070786 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:31:48.070795 zram_generator::config[1086]: No configuration found. Jan 24 00:31:48.070805 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:31:48.070814 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:31:48.070825 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:31:48.070834 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:31:48.070871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:31:48.070881 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:31:48.070890 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:31:48.070898 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:31:48.070907 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:31:48.070916 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:31:48.070928 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:31:48.070937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:48.070946 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
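After 'Switching root' the initrd journal stops, PID 1 re-executes from the verified /usr, loads the SELinux policy (89.202ms here) and detects first boot, which is why the machine ID is initialized from the VM UUID. The logged feature string and first-boot artifacts can be checked later with:

    # Compile-time features, matching the '+PAM +AUDIT +SELINUX ...' line
    systemctl --version
    # Machine ID that was just initialized from the VM UUID
    cat /etc/machine-id
    # Where early boot time went, including the policy load
    systemd-analyze blame | head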
Jan 24 00:31:48.070955 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:31:48.070964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:31:48.070973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:31:48.070982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:31:48.070992 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:31:48.071001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:48.071012 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:31:48.071021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:48.071035 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:31:48.071044 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:31:48.071053 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:31:48.071062 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:31:48.071073 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:31:48.071082 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:31:48.071091 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:31:48.071100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:48.071109 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:48.071118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:48.071127 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:31:48.071136 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:31:48.071145 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:31:48.071154 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:31:48.071165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:48.071178 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:31:48.071187 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:31:48.071197 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:31:48.071206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:31:48.071215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:48.071224 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:31:48.071233 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:31:48.071244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:31:48.071253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:31:48.071262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:31:48.071271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 24 00:31:48.071280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:48.071292 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:31:48.071305 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 24 00:31:48.071321 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 24 00:31:48.071336 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:31:48.071348 kernel: fuse: init (API version 7.39) Jan 24 00:31:48.071358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:31:48.071367 kernel: loop: module loaded Jan 24 00:31:48.071379 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:31:48.071388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:31:48.071396 kernel: ACPI: bus type drm_connector registered Jan 24 00:31:48.071431 systemd-journald[1192]: Collecting audit messages is disabled. Jan 24 00:31:48.071461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:31:48.071471 systemd-journald[1192]: Journal started Jan 24 00:31:48.071487 systemd-journald[1192]: Runtime Journal (/run/log/journal/571b6cb2a7dc4f0a91d412be2179056a) is 8.0M, max 76.3M, 68.3M free. Jan 24 00:31:48.078851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:48.086931 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:31:48.088141 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:31:48.088914 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:31:48.089416 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:31:48.090001 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:31:48.090572 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:31:48.091134 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:31:48.092021 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:31:48.092768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:48.093531 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:31:48.093699 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:31:48.094530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:48.094746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:48.095454 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:31:48.095664 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:31:48.096384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:48.096553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:48.097471 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:31:48.097705 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 24 00:31:48.098426 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:48.098645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:31:48.099614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:31:48.100346 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:31:48.101252 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:31:48.112487 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:31:48.117948 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:31:48.126958 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:31:48.127897 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:31:48.130973 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:31:48.133589 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:31:48.134941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:31:48.143967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:31:48.144486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:31:48.147414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:31:48.159121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:31:48.162832 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:31:48.165008 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:31:48.178331 systemd-journald[1192]: Time spent on flushing to /var/log/journal/571b6cb2a7dc4f0a91d412be2179056a is 30.653ms for 1167 entries. Jan 24 00:31:48.178331 systemd-journald[1192]: System Journal (/var/log/journal/571b6cb2a7dc4f0a91d412be2179056a) is 8.0M, max 584.8M, 576.8M free. Jan 24 00:31:48.236122 systemd-journald[1192]: Received client request to flush runtime journal. Jan 24 00:31:48.187313 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:31:48.189971 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:31:48.216939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:48.220231 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Jan 24 00:31:48.220242 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Jan 24 00:31:48.231177 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:31:48.243038 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:31:48.243813 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:31:48.267320 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:48.277052 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
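The journald lines above show the 8.0M runtime journal in /run being flushed into the persistent journal under /var/log/journal (max 584.8M on this disk). The limits come from journald.conf; a sketch of tuning them (the 500M value is an example, not this host's setting):

    # Current footprint of the persistent journal
    journalctl --disk-usage
    # Override size limits via a drop-in
    mkdir -p /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/size.conf <<'EOF'
    [Journal]
    Storage=persistent
    SystemMaxUse=500M
    EOF
    systemctl restart systemd-journald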
Jan 24 00:31:48.295240 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:31:48.304076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:31:48.310923 udevadm[1248]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:31:48.320409 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 24 00:31:48.320705 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 24 00:31:48.327123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:31:48.579364 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:31:48.587004 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:31:48.616082 systemd-udevd[1257]: Using default interface naming scheme 'v255'. Jan 24 00:31:48.646409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:48.659179 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:31:48.691090 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:31:48.740217 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 24 00:31:48.755922 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:31:48.796858 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:31:48.814999 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:31:48.827481 systemd-networkd[1262]: lo: Link UP Jan 24 00:31:48.827490 systemd-networkd[1262]: lo: Gained carrier Jan 24 00:31:48.829918 systemd-networkd[1262]: Enumeration completed Jan 24 00:31:48.830055 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:31:48.830287 systemd-networkd[1262]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:48.830291 systemd-networkd[1262]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:31:48.831317 systemd-networkd[1262]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:48.831389 systemd-networkd[1262]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:31:48.832049 systemd-networkd[1262]: eth0: Link UP Jan 24 00:31:48.832082 systemd-networkd[1262]: eth0: Gained carrier Jan 24 00:31:48.832115 systemd-networkd[1262]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:48.835234 systemd-networkd[1262]: eth1: Link UP Jan 24 00:31:48.835973 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:31:48.838499 systemd-networkd[1262]: eth1: Gained carrier Jan 24 00:31:48.838518 systemd-networkd[1262]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:48.845336 systemd-networkd[1262]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
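As in the initrd, networkd only matches eth0/eth1 through the catch-all zz-default.network, and it warns that the interface names are potentially unpredictable. A sturdier setup would match on the MAC address instead; an illustrative unit (the file name is hypothetical and the MAC a placeholder):

    mkdir -p /etc/systemd/network
    cat > /etc/systemd/network/10-uplink.network <<'EOF'
    [Match]
    MACAddress=00:11:22:33:44:55

    [Network]
    DHCP=yes
    EOF
    # Apply without a reboot
    networkctl reload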
Jan 24 00:31:48.855895 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:31:48.857262 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:48.857491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:48.870051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:31:48.876151 systemd-networkd[1262]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 24 00:31:48.879004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:31:48.888010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:48.889859 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 00:31:48.892230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:31:48.892270 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:31:48.892304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:48.899680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:48.899907 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:48.902744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:48.903918 systemd-networkd[1262]: eth0: DHCPv4 address 65.109.167.77/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 24 00:31:48.904056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:48.905257 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:48.906082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:31:48.910760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:31:48.911657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:31:48.922029 systemd-networkd[1262]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:48.938865 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 24 00:31:48.947290 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:31:48.947357 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:31:48.954438 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:31:48.954582 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:31:48.954774 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:31:48.956026 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:31:48.948210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 24 00:31:48.960170 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 24 00:31:48.964247 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 24 00:31:48.964283 kernel: [drm] features: -context_init Jan 24 00:31:48.969857 kernel: [drm] number of scanouts: 1 Jan 24 00:31:48.972900 kernel: [drm] number of cap sets: 0 Jan 24 00:31:48.972954 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1272) Jan 24 00:31:48.974253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:48.974754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:48.977971 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 24 00:31:48.981049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:49.007650 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 24 00:31:49.007707 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:31:49.016293 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 24 00:31:49.027448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:49.027726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:49.042074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:49.047952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:31:49.086493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:49.119476 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:31:49.123203 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:31:49.140215 lvm[1334]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:31:49.182785 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:31:49.184119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:49.195128 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:31:49.208249 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:31:49.257043 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:31:49.258492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:31:49.259149 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:31:49.259196 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:31:49.259350 systemd[1]: Reached target machines.target - Containers. Jan 24 00:31:49.262509 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:31:49.269066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:31:49.276190 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:31:49.277472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 24 00:31:49.281091 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:31:49.288381 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:31:49.303957 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:31:49.310223 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:31:49.349943 kernel: loop0: detected capacity change from 0 to 224512 Jan 24 00:31:49.363624 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:31:49.365402 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:31:49.369321 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:31:49.411907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:31:49.438563 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 00:31:49.491924 kernel: loop2: detected capacity change from 0 to 8 Jan 24 00:31:49.528316 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:31:49.593912 kernel: loop4: detected capacity change from 0 to 224512 Jan 24 00:31:49.631911 kernel: loop5: detected capacity change from 0 to 142488 Jan 24 00:31:49.659334 kernel: loop6: detected capacity change from 0 to 8 Jan 24 00:31:49.665878 kernel: loop7: detected capacity change from 0 to 140768 Jan 24 00:31:49.698960 (sd-merge)[1359]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 24 00:31:49.700418 (sd-merge)[1359]: Merged extensions into '/usr'. Jan 24 00:31:49.734894 systemd[1]: Reloading requested from client PID 1345 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:31:49.734922 systemd[1]: Reloading... Jan 24 00:31:49.833864 zram_generator::config[1387]: No configuration found. Jan 24 00:31:49.891854 ldconfig[1341]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:31:49.964257 systemd-networkd[1262]: eth1: Gained IPv6LL Jan 24 00:31:49.983917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:31:50.037591 systemd[1]: Reloading finished in 301 ms. Jan 24 00:31:50.051292 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:31:50.053947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:31:50.061636 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:31:50.080108 systemd[1]: Starting ensure-sysext.service... Jan 24 00:31:50.085988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:31:50.089361 systemd[1]: Reloading requested from client PID 1439 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:31:50.089374 systemd[1]: Reloading... Jan 24 00:31:50.092227 systemd-networkd[1262]: eth0: Gained IPv6LL Jan 24 00:31:50.147629 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:31:50.150144 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 24 00:31:50.153702 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:31:50.154354 systemd-tmpfiles[1440]: ACLs are not supported, ignoring. Jan 24 00:31:50.154580 systemd-tmpfiles[1440]: ACLs are not supported, ignoring. Jan 24 00:31:50.163734 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:31:50.163961 systemd-tmpfiles[1440]: Skipping /boot Jan 24 00:31:50.178476 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:31:50.179965 systemd-tmpfiles[1440]: Skipping /boot Jan 24 00:31:50.189867 zram_generator::config[1494]: No configuration found. Jan 24 00:31:50.256907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:31:50.310720 systemd[1]: Reloading finished in 220 ms. Jan 24 00:31:50.327946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:50.342127 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:31:50.356514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:31:50.361095 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:31:50.373613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:31:50.388117 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:31:50.400458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:50.400792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:50.411977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:31:50.426280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:31:50.440305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:50.441622 augenrules[1540]: No rules Jan 24 00:31:50.453124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:50.453335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:50.454653 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:31:50.473890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:31:50.478349 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:50.478531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:50.485859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:50.486059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:50.489309 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:50.490222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 24 00:31:50.502416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:50.503741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:50.507044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:31:50.511919 systemd-resolved[1526]: Positive Trust Anchors: Jan 24 00:31:50.511937 systemd-resolved[1526]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:31:50.511967 systemd-resolved[1526]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:31:50.516530 systemd-resolved[1526]: Using system hostname 'ci-4081-3-6-n-a9e48d2ea0'. Jan 24 00:31:50.517077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:31:50.521569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:50.523973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:50.533241 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:31:50.533702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:50.534307 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:31:50.538188 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:31:50.541638 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:31:50.543137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:50.543341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:50.544928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:50.545183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:50.547432 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:50.547704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:31:50.556929 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:31:50.559344 systemd[1]: Reached target network.target - Network. Jan 24 00:31:50.560443 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:31:50.560824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:50.561499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:50.561859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:50.567011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 24 00:31:50.569029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:31:50.573597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:31:50.578276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:50.579559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:50.579667 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:31:50.579725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:50.582666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:50.582940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:50.585505 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:31:50.585677 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:31:50.586480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:50.586640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:50.591977 systemd[1]: Finished ensure-sysext.service. Jan 24 00:31:50.595288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:31:50.597953 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:31:50.598577 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:50.598743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:31:50.602677 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:31:50.662396 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:31:50.663041 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:31:50.663520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:31:50.665292 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:31:50.665689 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:31:50.666072 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:31:50.666101 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:31:50.666429 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:31:50.667174 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:31:50.669212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:31:50.669544 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:31:50.671267 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:31:50.673254 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 24 00:31:50.677522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:31:50.679743 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:31:50.680117 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:31:50.680414 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:31:50.681162 systemd[1]: System is tainted: cgroupsv1 Jan 24 00:31:50.681195 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:31:50.681216 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:31:50.682916 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:31:50.686958 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:31:50.692236 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:31:50.709801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:31:50.715003 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:31:50.717279 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:31:50.727341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:31:50.732968 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:31:50.742137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:31:50.750912 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:31:50.759420 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 24 00:31:50.767691 coreos-metadata[1595]: Jan 24 00:31:50.767 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 24 00:31:50.778000 coreos-metadata[1595]: Jan 24 00:31:50.767 INFO Fetch successful Jan 24 00:31:50.778000 coreos-metadata[1595]: Jan 24 00:31:50.767 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 24 00:31:50.778000 coreos-metadata[1595]: Jan 24 00:31:50.768 INFO Fetch successful Jan 24 00:31:50.778067 extend-filesystems[1601]: Found loop4 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found loop5 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found loop6 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found loop7 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda1 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda2 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda3 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found usr Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda4 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda6 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda7 Jan 24 00:31:50.778067 extend-filesystems[1601]: Found sda9 Jan 24 00:31:50.778067 extend-filesystems[1601]: Checking size of /dev/sda9 Jan 24 00:31:50.831809 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 24 00:31:50.770974 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 24 00:31:50.832674 jq[1598]: false Jan 24 00:31:50.840241 extend-filesystems[1601]: Resized partition /dev/sda9 Jan 24 00:31:50.859284 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1270) Jan 24 00:31:50.778595 dbus-daemon[1596]: [system] SELinux support is enabled Jan 24 00:31:50.780260 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:31:50.862302 extend-filesystems[1630]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:31:50.790574 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:31:50.815727 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:31:50.823238 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:31:50.841131 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:31:50.864011 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:31:50.875921 jq[1638]: true Jan 24 00:31:50.877406 update_engine[1634]: I20260124 00:31:50.877116 1634 main.cc:92] Flatcar Update Engine starting Jan 24 00:31:50.880559 update_engine[1634]: I20260124 00:31:50.880465 1634 update_check_scheduler.cc:74] Next update check in 3m6s Jan 24 00:31:50.884151 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:31:50.884421 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:31:50.887364 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:31:50.887611 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:31:50.897131 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:31:50.903224 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:31:50.903456 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:31:50.932537 systemd-logind[1620]: New seat seat0. Jan 24 00:31:50.937221 systemd-logind[1620]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:31:50.942225 jq[1649]: true Jan 24 00:31:50.937241 systemd-logind[1620]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:31:50.938002 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:31:50.946166 (ntainerd)[1650]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:31:50.971134 dbus-daemon[1596]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:31:50.981201 tar[1648]: linux-amd64/LICENSE Jan 24 00:31:50.981734 tar[1648]: linux-amd64/helm Jan 24 00:31:51.761909 systemd-resolved[1526]: Clock change detected. Flushing caches. Jan 24 00:31:51.762054 systemd-timesyncd[1588]: Contacted time server 185.233.107.180:123 (0.flatcar.pool.ntp.org). Jan 24 00:31:51.762094 systemd-timesyncd[1588]: Initial clock synchronization to Sat 2026-01-24 00:31:51.761870 UTC. Jan 24 00:31:51.781901 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:31:51.785886 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:31:51.788127 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 24 00:31:51.788220 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:31:51.791153 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:31:51.791237 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:31:51.792112 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:31:51.800597 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:31:51.835104 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:31:51.837736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:31:51.885586 locksmithd[1684]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:31:51.922680 bash[1689]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:31:51.925183 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:31:51.935654 systemd[1]: Starting sshkeys.service... Jan 24 00:31:51.962196 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:31:51.971610 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:31:51.982274 containerd[1650]: time="2026-01-24T00:31:51.982210573Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:31:52.035591 coreos-metadata[1703]: Jan 24 00:31:52.035 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 24 00:31:52.038434 coreos-metadata[1703]: Jan 24 00:31:52.037 INFO Fetch successful Jan 24 00:31:52.044186 containerd[1650]: time="2026-01-24T00:31:52.044151789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:31:52.044275 unknown[1703]: wrote ssh authorized keys file for user: core Jan 24 00:31:52.048397 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 24 00:31:52.073799 containerd[1650]: time="2026-01-24T00:31:52.046740900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:31:52.073799 containerd[1650]: time="2026-01-24T00:31:52.048435441Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:31:52.073799 containerd[1650]: time="2026-01-24T00:31:52.048457971Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:31:52.074671 containerd[1650]: time="2026-01-24T00:31:52.074643162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:31:52.074696 containerd[1650]: time="2026-01-24T00:31:52.074674782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:31:52.074756 containerd[1650]: time="2026-01-24T00:31:52.074737612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:31:52.074756 containerd[1650]: time="2026-01-24T00:31:52.074754192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:31:52.074969 containerd[1650]: time="2026-01-24T00:31:52.074953002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:31:52.074969 containerd[1650]: time="2026-01-24T00:31:52.074967512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:31:52.075003 containerd[1650]: time="2026-01-24T00:31:52.074977142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:31:52.075003 containerd[1650]: time="2026-01-24T00:31:52.074984232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:31:52.075069 containerd[1650]: time="2026-01-24T00:31:52.075055122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:31:52.075258 containerd[1650]: time="2026-01-24T00:31:52.075243162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:31:52.075379 containerd[1650]: time="2026-01-24T00:31:52.075365572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:31:52.075414 containerd[1650]: time="2026-01-24T00:31:52.075378112Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:31:52.075482 containerd[1650]: time="2026-01-24T00:31:52.075460172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:31:52.075917 containerd[1650]: time="2026-01-24T00:31:52.075501002Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:31:52.077833 extend-filesystems[1630]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:31:52.077833 extend-filesystems[1630]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:31:52.077833 extend-filesystems[1630]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 24 00:31:52.084042 extend-filesystems[1601]: Resized filesystem in /dev/sda9 Jan 24 00:31:52.084042 extend-filesystems[1601]: Found sr0 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.083443895Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.083526785Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.083543385Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.083555185Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.083567785Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.084209806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.086802657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.086987187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.087020087Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.087033037Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.087050957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.087063907Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.087075447Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.087142 containerd[1650]: time="2026-01-24T00:31:52.087104317Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.079790 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087118267Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087131277Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087449567Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087467757Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087490087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087503587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087531837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087557987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087578517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087610137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087621517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087634417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087646177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.091573 containerd[1650]: time="2026-01-24T00:31:52.087669617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.080502 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:31:52.091817 update-ssh-keys[1709]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.087695847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.087707797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.087719897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.087735837Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088040667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088056237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088066897Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088134137Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088150967Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088162447Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088173677Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088260347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088272787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:31:52.092540 containerd[1650]: time="2026-01-24T00:31:52.088282857Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:31:52.092736 containerd[1650]: time="2026-01-24T00:31:52.088292457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:31:52.092752 containerd[1650]: time="2026-01-24T00:31:52.088686788Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:31:52.092752 
containerd[1650]: time="2026-01-24T00:31:52.088745218Z" level=info msg="Connect containerd service" Jan 24 00:31:52.092752 containerd[1650]: time="2026-01-24T00:31:52.088770708Z" level=info msg="using legacy CRI server" Jan 24 00:31:52.092752 containerd[1650]: time="2026-01-24T00:31:52.088776588Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:31:52.092752 containerd[1650]: time="2026-01-24T00:31:52.088886318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:31:52.094114 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:31:52.094987 containerd[1650]: time="2026-01-24T00:31:52.094879840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:31:52.095182 containerd[1650]: time="2026-01-24T00:31:52.095165170Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:31:52.095228 containerd[1650]: time="2026-01-24T00:31:52.095214150Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:31:52.095267 containerd[1650]: time="2026-01-24T00:31:52.095244760Z" level=info msg="Start subscribing containerd event" Jan 24 00:31:52.095290 containerd[1650]: time="2026-01-24T00:31:52.095280560Z" level=info msg="Start recovering state" Jan 24 00:31:52.095354 containerd[1650]: time="2026-01-24T00:31:52.095343430Z" level=info msg="Start event monitor" Jan 24 00:31:52.095367 containerd[1650]: time="2026-01-24T00:31:52.095362300Z" level=info msg="Start snapshots syncer" Jan 24 00:31:52.095399 containerd[1650]: time="2026-01-24T00:31:52.095372170Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:31:52.095399 containerd[1650]: time="2026-01-24T00:31:52.095378540Z" level=info msg="Start streaming server" Jan 24 00:31:52.102890 systemd[1]: Finished sshkeys.service. Jan 24 00:31:52.110143 containerd[1650]: time="2026-01-24T00:31:52.110113817Z" level=info msg="containerd successfully booted in 0.132468s" Jan 24 00:31:52.111081 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:31:52.211934 sshd_keygen[1640]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:31:52.245275 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:31:52.255660 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:31:52.262653 systemd[1]: Started sshd@0-65.109.167.77:22-20.161.92.111:46950.service - OpenSSH per-connection server daemon (20.161.92.111:46950). Jan 24 00:31:52.267483 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:31:52.267755 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:31:52.281755 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:31:52.307079 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:31:52.317187 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:31:52.321544 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:31:52.324184 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:31:52.424138 tar[1648]: linux-amd64/README.md Jan 24 00:31:52.434663 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 24 00:31:53.038724 sshd[1731]: Accepted publickey for core from 20.161.92.111 port 46950 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:31:53.042279 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:53.063662 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:31:53.080363 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:31:53.094499 systemd-logind[1620]: New session 1 of user core. Jan 24 00:31:53.115568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:31:53.135570 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:31:53.155258 (systemd)[1755]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:31:53.163250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:53.181022 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:31:53.181132 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:31:53.266807 systemd[1755]: Queued start job for default target default.target. Jan 24 00:31:53.267827 systemd[1755]: Created slice app.slice - User Application Slice. Jan 24 00:31:53.267918 systemd[1755]: Reached target paths.target - Paths. Jan 24 00:31:53.267975 systemd[1755]: Reached target timers.target - Timers. Jan 24 00:31:53.273969 systemd[1755]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:31:53.280419 systemd[1755]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:31:53.280469 systemd[1755]: Reached target sockets.target - Sockets. Jan 24 00:31:53.280482 systemd[1755]: Reached target basic.target - Basic System. Jan 24 00:31:53.280515 systemd[1755]: Reached target default.target - Main User Target. Jan 24 00:31:53.280544 systemd[1755]: Startup finished in 108ms. Jan 24 00:31:53.282308 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:31:53.290616 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:31:53.300572 systemd[1]: Startup finished in 8.713s (kernel) + 5.751s (userspace) = 14.465s. Jan 24 00:31:53.796350 kubelet[1760]: E0124 00:31:53.796248 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:31:53.800558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:31:53.801126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:31:53.839772 systemd[1]: Started sshd@1-65.109.167.77:22-20.161.92.111:52472.service - OpenSSH per-connection server daemon (20.161.92.111:52472). Jan 24 00:31:54.605007 sshd[1784]: Accepted publickey for core from 20.161.92.111 port 52472 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:31:54.607986 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:54.615905 systemd-logind[1620]: New session 2 of user core. Jan 24 00:31:54.624916 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 24 00:31:55.149009 sshd[1784]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:55.151811 systemd[1]: sshd@1-65.109.167.77:22-20.161.92.111:52472.service: Deactivated successfully. Jan 24 00:31:55.155253 systemd-logind[1620]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:31:55.155905 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:31:55.157306 systemd-logind[1620]: Removed session 2. Jan 24 00:31:55.277818 systemd[1]: Started sshd@2-65.109.167.77:22-20.161.92.111:52482.service - OpenSSH per-connection server daemon (20.161.92.111:52482). Jan 24 00:31:56.032531 sshd[1792]: Accepted publickey for core from 20.161.92.111 port 52482 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:31:56.034920 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:56.041722 systemd-logind[1620]: New session 3 of user core. Jan 24 00:31:56.045016 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:31:56.563784 sshd[1792]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:56.569030 systemd[1]: sshd@2-65.109.167.77:22-20.161.92.111:52482.service: Deactivated successfully. Jan 24 00:31:56.571622 systemd-logind[1620]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:31:56.572765 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:31:56.573290 systemd-logind[1620]: Removed session 3. Jan 24 00:31:56.694730 systemd[1]: Started sshd@3-65.109.167.77:22-20.161.92.111:52488.service - OpenSSH per-connection server daemon (20.161.92.111:52488). Jan 24 00:31:57.459115 sshd[1800]: Accepted publickey for core from 20.161.92.111 port 52488 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:31:57.461896 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:57.469885 systemd-logind[1620]: New session 4 of user core. Jan 24 00:31:57.481932 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:31:57.997933 sshd[1800]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:58.003621 systemd[1]: sshd@3-65.109.167.77:22-20.161.92.111:52488.service: Deactivated successfully. Jan 24 00:31:58.010853 systemd-logind[1620]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:31:58.012075 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:31:58.014099 systemd-logind[1620]: Removed session 4. Jan 24 00:31:58.127857 systemd[1]: Started sshd@4-65.109.167.77:22-20.161.92.111:52500.service - OpenSSH per-connection server daemon (20.161.92.111:52500). Jan 24 00:31:58.903023 sshd[1808]: Accepted publickey for core from 20.161.92.111 port 52500 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:31:58.905909 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:58.913925 systemd-logind[1620]: New session 5 of user core. Jan 24 00:31:58.929957 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 24 00:31:59.334582 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:31:59.335263 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:31:59.355851 sudo[1812]: pam_unix(sudo:session): session closed for user root Jan 24 00:31:59.479883 sshd[1808]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:59.485660 systemd[1]: sshd@4-65.109.167.77:22-20.161.92.111:52500.service: Deactivated successfully. Jan 24 00:31:59.492662 systemd-logind[1620]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:31:59.493156 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:31:59.496361 systemd-logind[1620]: Removed session 5. Jan 24 00:31:59.608138 systemd[1]: Started sshd@5-65.109.167.77:22-20.161.92.111:52510.service - OpenSSH per-connection server daemon (20.161.92.111:52510). Jan 24 00:32:00.378194 sshd[1817]: Accepted publickey for core from 20.161.92.111 port 52510 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:32:00.381363 sshd[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:00.389519 systemd-logind[1620]: New session 6 of user core. Jan 24 00:32:00.402884 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:32:00.794285 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:32:00.794856 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:32:00.803638 sudo[1822]: pam_unix(sudo:session): session closed for user root Jan 24 00:32:00.816251 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:32:00.817153 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:32:00.841920 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:32:00.858531 auditctl[1825]: No rules Jan 24 00:32:00.858728 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:32:00.859328 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:32:00.871128 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:32:00.930822 augenrules[1844]: No rules Jan 24 00:32:00.934578 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:32:00.938793 sudo[1821]: pam_unix(sudo:session): session closed for user root Jan 24 00:32:01.061777 sshd[1817]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:01.067164 systemd[1]: sshd@5-65.109.167.77:22-20.161.92.111:52510.service: Deactivated successfully. Jan 24 00:32:01.074119 systemd-logind[1620]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:32:01.075064 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:32:01.076964 systemd-logind[1620]: Removed session 6. Jan 24 00:32:01.191132 systemd[1]: Started sshd@6-65.109.167.77:22-20.161.92.111:52524.service - OpenSSH per-connection server daemon (20.161.92.111:52524). Jan 24 00:32:01.962984 sshd[1853]: Accepted publickey for core from 20.161.92.111 port 52524 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:32:01.965943 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:01.975384 systemd-logind[1620]: New session 7 of user core. 
Jan 24 00:32:01.985930 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:32:02.383157 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:32:02.384059 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:32:02.686567 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:32:02.688633 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:32:03.121414 dockerd[1874]: time="2026-01-24T00:32:03.121271753Z" level=info msg="Starting up" Jan 24 00:32:03.237123 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport502580322-merged.mount: Deactivated successfully. Jan 24 00:32:03.353277 systemd[1]: var-lib-docker-metacopy\x2dcheck2517672728-merged.mount: Deactivated successfully. Jan 24 00:32:03.385637 dockerd[1874]: time="2026-01-24T00:32:03.385482463Z" level=info msg="Loading containers: start." Jan 24 00:32:03.594585 kernel: Initializing XFRM netlink socket Jan 24 00:32:03.759338 systemd-networkd[1262]: docker0: Link UP Jan 24 00:32:03.786954 dockerd[1874]: time="2026-01-24T00:32:03.786873760Z" level=info msg="Loading containers: done." Jan 24 00:32:03.814271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:32:03.814792 dockerd[1874]: time="2026-01-24T00:32:03.814726381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:32:03.814924 dockerd[1874]: time="2026-01-24T00:32:03.814878051Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:32:03.815115 dockerd[1874]: time="2026-01-24T00:32:03.815075341Z" level=info msg="Daemon has completed initialization" Jan 24 00:32:03.825017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:03.889946 dockerd[1874]: time="2026-01-24T00:32:03.889553933Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:32:03.889967 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:32:04.013575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:04.024226 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:32:04.072321 kubelet[2020]: E0124 00:32:04.072284 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:32:04.076939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:32:04.077325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:32:04.231967 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2725675401-merged.mount: Deactivated successfully. 
Jan 24 00:32:05.180086 containerd[1650]: time="2026-01-24T00:32:05.180039230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:32:05.829351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293386028.mount: Deactivated successfully. Jan 24 00:32:07.160690 containerd[1650]: time="2026-01-24T00:32:07.160643615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:07.161593 containerd[1650]: time="2026-01-24T00:32:07.161412515Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070747" Jan 24 00:32:07.162727 containerd[1650]: time="2026-01-24T00:32:07.162376686Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:07.167966 containerd[1650]: time="2026-01-24T00:32:07.167929438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:07.168810 containerd[1650]: time="2026-01-24T00:32:07.168462808Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.988376058s" Jan 24 00:32:07.168810 containerd[1650]: time="2026-01-24T00:32:07.168492908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:32:07.168979 containerd[1650]: time="2026-01-24T00:32:07.168965658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:32:08.904305 containerd[1650]: time="2026-01-24T00:32:08.904257451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:08.905346 containerd[1650]: time="2026-01-24T00:32:08.905219251Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993376" Jan 24 00:32:08.906323 containerd[1650]: time="2026-01-24T00:32:08.906072002Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:08.908105 containerd[1650]: time="2026-01-24T00:32:08.908063903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:08.908774 containerd[1650]: time="2026-01-24T00:32:08.908755973Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 
1.739740935s" Jan 24 00:32:08.908898 containerd[1650]: time="2026-01-24T00:32:08.908830283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 00:32:08.909193 containerd[1650]: time="2026-01-24T00:32:08.909179893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:32:10.070507 containerd[1650]: time="2026-01-24T00:32:10.070452917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:10.071556 containerd[1650]: time="2026-01-24T00:32:10.071358367Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405098" Jan 24 00:32:10.072472 containerd[1650]: time="2026-01-24T00:32:10.072451948Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:10.074974 containerd[1650]: time="2026-01-24T00:32:10.074939199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:10.076130 containerd[1650]: time="2026-01-24T00:32:10.075486979Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.166250286s" Jan 24 00:32:10.076130 containerd[1650]: time="2026-01-24T00:32:10.075508389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:32:10.076447 containerd[1650]: time="2026-01-24T00:32:10.076377289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:32:11.230001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367534621.mount: Deactivated successfully. 
Jan 24 00:32:11.672644 containerd[1650]: time="2026-01-24T00:32:11.671672424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:11.675251 containerd[1650]: time="2026-01-24T00:32:11.675209305Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161927" Jan 24 00:32:11.677237 containerd[1650]: time="2026-01-24T00:32:11.677205076Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:11.680281 containerd[1650]: time="2026-01-24T00:32:11.680230847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:11.682382 containerd[1650]: time="2026-01-24T00:32:11.682210028Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.605712479s" Jan 24 00:32:11.682382 containerd[1650]: time="2026-01-24T00:32:11.682258968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:32:11.683304 containerd[1650]: time="2026-01-24T00:32:11.683234678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:32:12.247327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122848577.mount: Deactivated successfully. 
Jan 24 00:32:13.200448 containerd[1650]: time="2026-01-24T00:32:13.200365410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:13.202071 containerd[1650]: time="2026-01-24T00:32:13.201642481Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Jan 24 00:32:13.202974 containerd[1650]: time="2026-01-24T00:32:13.202885731Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:13.205877 containerd[1650]: time="2026-01-24T00:32:13.205817003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:13.208107 containerd[1650]: time="2026-01-24T00:32:13.206765013Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.523480245s" Jan 24 00:32:13.208107 containerd[1650]: time="2026-01-24T00:32:13.206791553Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:32:13.208107 containerd[1650]: time="2026-01-24T00:32:13.207203363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:32:13.674800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715190820.mount: Deactivated successfully. 
Jan 24 00:32:13.682980 containerd[1650]: time="2026-01-24T00:32:13.682872311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:13.684661 containerd[1650]: time="2026-01-24T00:32:13.684537822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 24 00:32:13.685796 containerd[1650]: time="2026-01-24T00:32:13.685721592Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:13.691178 containerd[1650]: time="2026-01-24T00:32:13.689768194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:13.691178 containerd[1650]: time="2026-01-24T00:32:13.690954225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 483.720162ms" Jan 24 00:32:13.691178 containerd[1650]: time="2026-01-24T00:32:13.691019665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:32:13.692511 containerd[1650]: time="2026-01-24T00:32:13.692481315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:32:14.215920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:32:14.224636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:14.257093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031269934.mount: Deactivated successfully. Jan 24 00:32:14.439503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:14.444841 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:32:14.475634 kubelet[2186]: E0124 00:32:14.474715 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:32:14.477852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:32:14.478296 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:32:15.921944 containerd[1650]: time="2026-01-24T00:32:15.921896014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:15.923040 containerd[1650]: time="2026-01-24T00:32:15.922909074Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Jan 24 00:32:15.924695 containerd[1650]: time="2026-01-24T00:32:15.923720685Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:15.925762 containerd[1650]: time="2026-01-24T00:32:15.925742505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:15.926539 containerd[1650]: time="2026-01-24T00:32:15.926517646Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.233867941s" Jan 24 00:32:15.926575 containerd[1650]: time="2026-01-24T00:32:15.926542396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:32:18.076697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:18.084534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:18.112902 systemd[1]: Reloading requested from client PID 2266 ('systemctl') (unit session-7.scope)... Jan 24 00:32:18.112915 systemd[1]: Reloading... Jan 24 00:32:18.243419 zram_generator::config[2305]: No configuration found. Jan 24 00:32:18.338764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:32:18.399937 systemd[1]: Reloading finished in 286 ms. Jan 24 00:32:18.444928 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:32:18.445036 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:32:18.445489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:18.453215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:32:18.593259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:32:18.605508 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:32:18.669823 kubelet[2367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:32:18.670488 kubelet[2367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 24 00:32:18.670566 kubelet[2367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:32:18.670781 kubelet[2367]: I0124 00:32:18.670742 2367 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:32:19.027634 kubelet[2367]: I0124 00:32:19.027455 2367 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:32:19.027634 kubelet[2367]: I0124 00:32:19.027497 2367 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:32:19.028483 kubelet[2367]: I0124 00:32:19.028444 2367 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:32:19.071461 kubelet[2367]: E0124 00:32:19.071370 2367 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.109.167.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:19.073197 kubelet[2367]: I0124 00:32:19.072860 2367 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:32:19.085805 kubelet[2367]: E0124 00:32:19.085669 2367 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:32:19.085805 kubelet[2367]: I0124 00:32:19.085755 2367 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:32:19.093312 kubelet[2367]: I0124 00:32:19.092414 2367 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:32:19.094946 kubelet[2367]: I0124 00:32:19.094888 2367 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:32:19.095171 kubelet[2367]: I0124 00:32:19.094937 2367 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-a9e48d2ea0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:32:19.095319 kubelet[2367]: I0124 00:32:19.095180 2367 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:32:19.095319 kubelet[2367]: I0124 00:32:19.095196 2367 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:32:19.095482 kubelet[2367]: I0124 00:32:19.095382 2367 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:32:19.102106 kubelet[2367]: I0124 00:32:19.102063 2367 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:32:19.105675 kubelet[2367]: I0124 00:32:19.105623 2367 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:32:19.105675 kubelet[2367]: I0124 00:32:19.105667 2367 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:32:19.105800 kubelet[2367]: I0124 00:32:19.105701 2367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:32:19.113506 kubelet[2367]: W0124 00:32:19.113021 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.109.167.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a9e48d2ea0&limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:19.113506 kubelet[2367]: E0124 00:32:19.113097 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.109.167.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a9e48d2ea0&limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:19.113996 
kubelet[2367]: W0124 00:32:19.113947 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.109.167.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:19.114113 kubelet[2367]: E0124 00:32:19.114089 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.109.167.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:19.114309 kubelet[2367]: I0124 00:32:19.114288 2367 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:32:19.115077 kubelet[2367]: I0124 00:32:19.115055 2367 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:32:19.116140 kubelet[2367]: W0124 00:32:19.116115 2367 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:32:19.120840 kubelet[2367]: I0124 00:32:19.120813 2367 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:32:19.120969 kubelet[2367]: I0124 00:32:19.120954 2367 server.go:1287] "Started kubelet" Jan 24 00:32:19.122992 kubelet[2367]: I0124 00:32:19.122269 2367 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:32:19.125056 kubelet[2367]: I0124 00:32:19.123724 2367 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:32:19.127289 kubelet[2367]: I0124 00:32:19.127244 2367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:32:19.129305 kubelet[2367]: I0124 00:32:19.128774 2367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:32:19.129305 kubelet[2367]: I0124 00:32:19.129126 2367 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:32:19.131870 kubelet[2367]: E0124 00:32:19.129621 2367 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.109.167.77:6443/api/v1/namespaces/default/events\": dial tcp 65.109.167.77:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-a9e48d2ea0.188d836e78b03326 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-a9e48d2ea0,UID:ci-4081-3-6-n-a9e48d2ea0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a9e48d2ea0,},FirstTimestamp:2026-01-24 00:32:19.120927526 +0000 UTC m=+0.507041702,LastTimestamp:2026-01-24 00:32:19.120927526 +0000 UTC m=+0.507041702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a9e48d2ea0,}" Jan 24 00:32:19.133867 kubelet[2367]: I0124 00:32:19.133810 2367 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:32:19.139081 kubelet[2367]: E0124 00:32:19.139060 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" 
not found" Jan 24 00:32:19.139317 kubelet[2367]: I0124 00:32:19.139266 2367 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:32:19.139804 kubelet[2367]: I0124 00:32:19.139761 2367 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:32:19.140442 kubelet[2367]: I0124 00:32:19.139962 2367 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:32:19.140751 kubelet[2367]: W0124 00:32:19.140666 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.109.167.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:19.140896 kubelet[2367]: E0124 00:32:19.140874 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.109.167.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:19.141188 kubelet[2367]: I0124 00:32:19.141165 2367 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:32:19.141374 kubelet[2367]: I0124 00:32:19.141351 2367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:32:19.144357 kubelet[2367]: E0124 00:32:19.144299 2367 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:32:19.145360 kubelet[2367]: I0124 00:32:19.144671 2367 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:32:19.161300 kubelet[2367]: E0124 00:32:19.161244 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.167.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a9e48d2ea0?timeout=10s\": dial tcp 65.109.167.77:6443: connect: connection refused" interval="200ms" Jan 24 00:32:19.164379 kubelet[2367]: I0124 00:32:19.164312 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:32:19.167190 kubelet[2367]: I0124 00:32:19.167134 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:32:19.167190 kubelet[2367]: I0124 00:32:19.167160 2367 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:32:19.167190 kubelet[2367]: I0124 00:32:19.167184 2367 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:32:19.167190 kubelet[2367]: I0124 00:32:19.167194 2367 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:32:19.167438 kubelet[2367]: E0124 00:32:19.167255 2367 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:32:19.182914 kubelet[2367]: W0124 00:32:19.180968 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.109.167.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:19.182914 kubelet[2367]: E0124 00:32:19.181045 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.109.167.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:19.208824 kubelet[2367]: I0124 00:32:19.208780 2367 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:32:19.209314 kubelet[2367]: I0124 00:32:19.209260 2367 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:32:19.209464 kubelet[2367]: I0124 00:32:19.209446 2367 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:32:19.212731 kubelet[2367]: I0124 00:32:19.212707 2367 policy_none.go:49] "None policy: Start" Jan 24 00:32:19.212847 kubelet[2367]: I0124 00:32:19.212832 2367 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:32:19.213007 kubelet[2367]: I0124 00:32:19.212992 2367 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:32:19.221754 kubelet[2367]: I0124 00:32:19.221722 2367 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:32:19.222143 kubelet[2367]: I0124 00:32:19.222121 2367 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:32:19.222267 kubelet[2367]: I0124 00:32:19.222228 2367 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:32:19.226001 kubelet[2367]: I0124 00:32:19.225969 2367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:32:19.234132 kubelet[2367]: E0124 00:32:19.234103 2367 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:32:19.234272 kubelet[2367]: E0124 00:32:19.234254 2367 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:19.281961 kubelet[2367]: E0124 00:32:19.281798 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.289289 kubelet[2367]: E0124 00:32:19.289143 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.289857 kubelet[2367]: E0124 00:32:19.289671 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.325941 kubelet[2367]: I0124 00:32:19.325870 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.326485 kubelet[2367]: E0124 00:32:19.326370 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.109.167.77:6443/api/v1/nodes\": dial tcp 65.109.167.77:6443: connect: connection refused" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.362960 kubelet[2367]: E0124 00:32:19.362916 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.167.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a9e48d2ea0?timeout=10s\": dial tcp 65.109.167.77:6443: connect: connection refused" interval="400ms" Jan 24 00:32:19.441556 kubelet[2367]: I0124 00:32:19.441464 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b6b3f948cc2112cb16d830303c9b1b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"9b6b3f948cc2112cb16d830303c9b1b0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.441556 kubelet[2367]: I0124 00:32:19.441536 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.441556 kubelet[2367]: I0124 00:32:19.441563 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.441818 kubelet[2367]: I0124 00:32:19.441589 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" Jan 24 
00:32:19.441818 kubelet[2367]: I0124 00:32:19.441639 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9d571bb308c3cb313081f80c59e61eb-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"b9d571bb308c3cb313081f80c59e61eb\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.441818 kubelet[2367]: I0124 00:32:19.441689 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b3f948cc2112cb16d830303c9b1b0-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"9b6b3f948cc2112cb16d830303c9b1b0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.441818 kubelet[2367]: I0124 00:32:19.441751 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.441818 kubelet[2367]: I0124 00:32:19.441774 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b3f948cc2112cb16d830303c9b1b0-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"9b6b3f948cc2112cb16d830303c9b1b0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.442079 kubelet[2367]: I0124 00:32:19.441797 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.530654 kubelet[2367]: I0124 00:32:19.529970 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.530853 kubelet[2367]: E0124 00:32:19.530656 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.109.167.77:6443/api/v1/nodes\": dial tcp 65.109.167.77:6443: connect: connection refused" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.585071 containerd[1650]: time="2026-01-24T00:32:19.584977119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-a9e48d2ea0,Uid:9b6b3f948cc2112cb16d830303c9b1b0,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:19.591053 containerd[1650]: time="2026-01-24T00:32:19.590971342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-a9e48d2ea0,Uid:b9d571bb308c3cb313081f80c59e61eb,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:19.591592 containerd[1650]: time="2026-01-24T00:32:19.591498892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0,Uid:8a6139feaf230f263171a889dcbfbc89,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:19.764157 kubelet[2367]: E0124 00:32:19.764066 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.167.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a9e48d2ea0?timeout=10s\": dial tcp 
65.109.167.77:6443: connect: connection refused" interval="800ms" Jan 24 00:32:19.934337 kubelet[2367]: I0124 00:32:19.934133 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:19.934835 kubelet[2367]: E0124 00:32:19.934668 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.109.167.77:6443/api/v1/nodes\": dial tcp 65.109.167.77:6443: connect: connection refused" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:20.072975 kubelet[2367]: W0124 00:32:20.072883 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.109.167.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:20.073133 kubelet[2367]: E0124 00:32:20.072980 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.109.167.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:20.078652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807703341.mount: Deactivated successfully. Jan 24 00:32:20.089446 containerd[1650]: time="2026-01-24T00:32:20.087842429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:20.089847 containerd[1650]: time="2026-01-24T00:32:20.089757030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:20.091687 containerd[1650]: time="2026-01-24T00:32:20.091563340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jan 24 00:32:20.091687 containerd[1650]: time="2026-01-24T00:32:20.091650690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:32:20.093502 containerd[1650]: time="2026-01-24T00:32:20.093452341Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:20.095032 containerd[1650]: time="2026-01-24T00:32:20.094946472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:32:20.097822 containerd[1650]: time="2026-01-24T00:32:20.097768633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:20.101467 containerd[1650]: time="2026-01-24T00:32:20.101345495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:32:20.103587 containerd[1650]: time="2026-01-24T00:32:20.103169955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.098283ms" Jan 24 00:32:20.107382 containerd[1650]: time="2026-01-24T00:32:20.107293447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.197558ms" Jan 24 00:32:20.119446 containerd[1650]: time="2026-01-24T00:32:20.118675342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.99756ms" Jan 24 00:32:20.306033 containerd[1650]: time="2026-01-24T00:32:20.305508020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:20.306033 containerd[1650]: time="2026-01-24T00:32:20.305580770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:20.306033 containerd[1650]: time="2026-01-24T00:32:20.305624500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:20.306033 containerd[1650]: time="2026-01-24T00:32:20.305804980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:20.314362 containerd[1650]: time="2026-01-24T00:32:20.313889713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:20.314362 containerd[1650]: time="2026-01-24T00:32:20.313962373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:20.314362 containerd[1650]: time="2026-01-24T00:32:20.313983013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:20.315219 containerd[1650]: time="2026-01-24T00:32:20.315153764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:20.321090 containerd[1650]: time="2026-01-24T00:32:20.320668576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:20.321090 containerd[1650]: time="2026-01-24T00:32:20.320752426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:20.321090 containerd[1650]: time="2026-01-24T00:32:20.320787376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:20.321090 containerd[1650]: time="2026-01-24T00:32:20.320925586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:20.355630 kubelet[2367]: W0124 00:32:20.355552 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.109.167.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a9e48d2ea0&limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:20.355865 kubelet[2367]: E0124 00:32:20.355638 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.109.167.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a9e48d2ea0&limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:20.385749 kubelet[2367]: W0124 00:32:20.385654 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.109.167.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.109.167.77:6443: connect: connection refused Jan 24 00:32:20.385857 kubelet[2367]: E0124 00:32:20.385764 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.109.167.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.109.167.77:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:32:20.413834 containerd[1650]: time="2026-01-24T00:32:20.413801495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0,Uid:8a6139feaf230f263171a889dcbfbc89,Namespace:kube-system,Attempt:0,} returns sandbox id \"70b3f891d4bc45b14de43a3bbc58f3486a9472348f1b8b166adc86c971682f92\"" Jan 24 00:32:20.416776 containerd[1650]: time="2026-01-24T00:32:20.416682376Z" level=info msg="CreateContainer within sandbox \"70b3f891d4bc45b14de43a3bbc58f3486a9472348f1b8b166adc86c971682f92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:32:20.420141 containerd[1650]: time="2026-01-24T00:32:20.420051387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-a9e48d2ea0,Uid:9b6b3f948cc2112cb16d830303c9b1b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"59344cef71b6c89acfe47210f9eb2a931aa9636196895db84e05875ab483a0fa\"" Jan 24 00:32:20.423526 containerd[1650]: time="2026-01-24T00:32:20.423504719Z" level=info msg="CreateContainer within sandbox \"59344cef71b6c89acfe47210f9eb2a931aa9636196895db84e05875ab483a0fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:32:20.429723 containerd[1650]: time="2026-01-24T00:32:20.429692031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-a9e48d2ea0,Uid:b9d571bb308c3cb313081f80c59e61eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b64b17aca8a0dca3656be434e2091567c2a3369cf35f36ba179978352284d26d\"" Jan 24 00:32:20.431882 containerd[1650]: time="2026-01-24T00:32:20.431857422Z" level=info msg="CreateContainer within sandbox \"70b3f891d4bc45b14de43a3bbc58f3486a9472348f1b8b166adc86c971682f92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02\"" Jan 24 00:32:20.432319 containerd[1650]: time="2026-01-24T00:32:20.432298782Z" level=info 
msg="StartContainer for \"09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02\"" Jan 24 00:32:20.432875 containerd[1650]: time="2026-01-24T00:32:20.432841903Z" level=info msg="CreateContainer within sandbox \"b64b17aca8a0dca3656be434e2091567c2a3369cf35f36ba179978352284d26d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:32:20.442611 containerd[1650]: time="2026-01-24T00:32:20.442572807Z" level=info msg="CreateContainer within sandbox \"59344cef71b6c89acfe47210f9eb2a931aa9636196895db84e05875ab483a0fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cc5735e1b7ee9e1e7eabe9fd13ed3540ae417bbf177ff7d379bbadc085666d3\"" Jan 24 00:32:20.443118 containerd[1650]: time="2026-01-24T00:32:20.443103757Z" level=info msg="StartContainer for \"7cc5735e1b7ee9e1e7eabe9fd13ed3540ae417bbf177ff7d379bbadc085666d3\"" Jan 24 00:32:20.446621 containerd[1650]: time="2026-01-24T00:32:20.446469268Z" level=info msg="CreateContainer within sandbox \"b64b17aca8a0dca3656be434e2091567c2a3369cf35f36ba179978352284d26d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5\"" Jan 24 00:32:20.446873 containerd[1650]: time="2026-01-24T00:32:20.446860788Z" level=info msg="StartContainer for \"d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5\"" Jan 24 00:32:20.538521 containerd[1650]: time="2026-01-24T00:32:20.538466117Z" level=info msg="StartContainer for \"7cc5735e1b7ee9e1e7eabe9fd13ed3540ae417bbf177ff7d379bbadc085666d3\" returns successfully" Jan 24 00:32:20.540210 containerd[1650]: time="2026-01-24T00:32:20.540186737Z" level=info msg="StartContainer for \"d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5\" returns successfully" Jan 24 00:32:20.543930 containerd[1650]: time="2026-01-24T00:32:20.543906149Z" level=info msg="StartContainer for \"09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02\" returns successfully" Jan 24 00:32:20.565098 kubelet[2367]: E0124 00:32:20.564994 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.109.167.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a9e48d2ea0?timeout=10s\": dial tcp 65.109.167.77:6443: connect: connection refused" interval="1.6s" Jan 24 00:32:20.736141 kubelet[2367]: I0124 00:32:20.736116 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:21.201023 kubelet[2367]: E0124 00:32:21.200594 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:21.201023 kubelet[2367]: E0124 00:32:21.200873 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:21.207322 kubelet[2367]: E0124 00:32:21.207220 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:21.892688 kubelet[2367]: I0124 00:32:21.892628 2367 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:21.893285 kubelet[2367]: E0124 00:32:21.892664 2367 kubelet_node_status.go:548] "Error updating node status, will retry" err="error 
getting node \"ci-4081-3-6-n-a9e48d2ea0\": node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:21.903448 kubelet[2367]: E0124 00:32:21.903427 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:21.924063 kubelet[2367]: E0124 00:32:21.923583 2367 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-n-a9e48d2ea0.188d836e78b03326 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-a9e48d2ea0,UID:ci-4081-3-6-n-a9e48d2ea0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a9e48d2ea0,},FirstTimestamp:2026-01-24 00:32:19.120927526 +0000 UTC m=+0.507041702,LastTimestamp:2026-01-24 00:32:19.120927526 +0000 UTC m=+0.507041702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a9e48d2ea0,}" Jan 24 00:32:22.003647 kubelet[2367]: E0124 00:32:22.003571 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.104071 kubelet[2367]: E0124 00:32:22.104038 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.206607 kubelet[2367]: E0124 00:32:22.204607 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.218411 kubelet[2367]: E0124 00:32:22.217068 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:22.219355 kubelet[2367]: E0124 00:32:22.219325 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:22.305188 kubelet[2367]: E0124 00:32:22.305129 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.377871 systemd[1]: Started sshd@7-65.109.167.77:22-120.79.196.63:37692.service - OpenSSH per-connection server daemon (120.79.196.63:37692). Jan 24 00:32:22.406261 kubelet[2367]: E0124 00:32:22.406208 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.410601 sshd[2648]: Connection closed by 120.79.196.63 port 37692 Jan 24 00:32:22.414462 systemd[1]: sshd@7-65.109.167.77:22-120.79.196.63:37692.service: Deactivated successfully. 
Jan 24 00:32:22.506724 kubelet[2367]: E0124 00:32:22.506528 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.607004 kubelet[2367]: E0124 00:32:22.606908 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.707849 kubelet[2367]: E0124 00:32:22.707765 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.809455 kubelet[2367]: E0124 00:32:22.808825 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:22.910446 kubelet[2367]: E0124 00:32:22.909751 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:23.010228 kubelet[2367]: E0124 00:32:23.010174 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:23.111313 kubelet[2367]: E0124 00:32:23.111255 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:23.211523 kubelet[2367]: E0124 00:32:23.211458 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" Jan 24 00:32:23.218511 kubelet[2367]: E0124 00:32:23.217628 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a9e48d2ea0\" not found" node="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:23.291217 kubelet[2367]: I0124 00:32:23.291156 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:23.342879 kubelet[2367]: I0124 00:32:23.342812 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:23.349254 kubelet[2367]: E0124 00:32:23.349192 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:23.349254 kubelet[2367]: I0124 00:32:23.349212 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:23.353086 kubelet[2367]: I0124 00:32:23.353056 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:32:24.069716 systemd[1]: Reloading requested from client PID 2653 ('systemctl') (unit session-7.scope)... Jan 24 00:32:24.069743 systemd[1]: Reloading... Jan 24 00:32:24.118058 kubelet[2367]: I0124 00:32:24.115735 2367 apiserver.go:52] "Watching apiserver" Jan 24 00:32:24.141952 kubelet[2367]: I0124 00:32:24.141889 2367 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:32:24.204433 zram_generator::config[2691]: No configuration found. Jan 24 00:32:24.307308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:32:24.370561 systemd[1]: Reloading finished in 299 ms. 
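Both systemd daemon reloads in this section (one before the final kubelet start, one here) log their duration: 286 ms and 299 ms. Collecting them is a one-liner over the same plain-text journal input:

import re

# Gather "Reloading finished in N ms." timings; expected here: [286, 299].
RELOAD_RE = re.compile(r"Reloading finished in (\d+) ms")

def reload_times_ms(journal_text: str) -> list:
    return [int(n) for n in RELOAD_RE.findall(journal_text)]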
Jan 24 00:32:24.412589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:32:24.435859 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:32:24.436190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:32:24.448113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:32:24.608541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:32:24.621227 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:32:24.676699 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:32:24.676699 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:32:24.676699 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:32:24.677112 kubelet[2753]: I0124 00:32:24.676829 2753 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:32:24.683455 kubelet[2753]: I0124 00:32:24.683071 2753 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:32:24.683455 kubelet[2753]: I0124 00:32:24.683088 2753 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:32:24.683455 kubelet[2753]: I0124 00:32:24.683234 2753 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:32:24.684300 kubelet[2753]: I0124 00:32:24.684285 2753 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 00:32:24.688570 kubelet[2753]: I0124 00:32:24.686735 2753 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:32:24.691135 kubelet[2753]: E0124 00:32:24.691117 2753 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:32:24.691211 kubelet[2753]: I0124 00:32:24.691204 2753 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:32:24.694565 kubelet[2753]: I0124 00:32:24.694539 2753 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:32:24.695100 kubelet[2753]: I0124 00:32:24.695078 2753 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:32:24.695299 kubelet[2753]: I0124 00:32:24.695138 2753 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-a9e48d2ea0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 24 00:32:24.695599 kubelet[2753]: I0124 00:32:24.695430 2753 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:32:24.695599 kubelet[2753]: I0124 00:32:24.695440 2753 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:32:24.695599 kubelet[2753]: I0124 00:32:24.695482 2753 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:32:24.695679 kubelet[2753]: I0124 00:32:24.695672 2753 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:32:24.695729 kubelet[2753]: I0124 00:32:24.695712 2753 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:32:24.696193 kubelet[2753]: I0124 00:32:24.696182 2753 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:32:24.697482 kubelet[2753]: I0124 00:32:24.697457 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:32:24.699702 kubelet[2753]: I0124 00:32:24.699690 2753 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:32:24.700416 kubelet[2753]: I0124 00:32:24.699999 2753 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:32:24.700416 kubelet[2753]: I0124 00:32:24.700280 2753 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:32:24.700416 kubelet[2753]: I0124 00:32:24.700299 2753 server.go:1287] "Started kubelet"
Jan 24 00:32:24.700725 kubelet[2753]: I0124 00:32:24.700698 2753 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:32:24.701360 kubelet[2753]: I0124 00:32:24.701350 2753 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 00:32:24.702131 kubelet[2753]: I0124 00:32:24.702101 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:32:24.702200 kubelet[2753]: I0124 00:32:24.702178 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:32:24.702341 kubelet[2753]: I0124 00:32:24.702332 2753 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:32:24.704109 kubelet[2753]: I0124 00:32:24.704095 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:32:24.718964 kubelet[2753]: I0124 00:32:24.718943 2753 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:32:24.720691 kubelet[2753]: E0124 00:32:24.720668 2753 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:32:24.721827 kubelet[2753]: I0124 00:32:24.721801 2753 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:32:24.722006 kubelet[2753]: I0124 00:32:24.721986 2753 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:32:24.724298 kubelet[2753]: I0124 00:32:24.724278 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:32:24.728285 kubelet[2753]: I0124 00:32:24.728128 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:32:24.729719 kubelet[2753]: I0124 00:32:24.729686 2753 factory.go:221] Registration of the containerd container factory successfully
Jan 24 00:32:24.729719 kubelet[2753]: I0124 00:32:24.729717 2753 factory.go:221] Registration of the systemd container factory successfully
Jan 24 00:32:24.730425 kubelet[2753]: I0124 00:32:24.730324 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:32:24.730425 kubelet[2753]: I0124 00:32:24.730347 2753 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 24 00:32:24.730425 kubelet[2753]: I0124 00:32:24.730363 2753 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 00:32:24.730425 kubelet[2753]: I0124 00:32:24.730369 2753 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 24 00:32:24.730733 kubelet[2753]: E0124 00:32:24.730533 2753 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:32:24.826707 kubelet[2753]: I0124 00:32:24.826663 2753 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:32:24.826707 kubelet[2753]: I0124 00:32:24.826687 2753 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:32:24.826707 kubelet[2753]: I0124 00:32:24.826709 2753 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:32:24.826901 kubelet[2753]: I0124 00:32:24.826875 2753 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 24 00:32:24.826901 kubelet[2753]: I0124 00:32:24.826889 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 24 00:32:24.826930 kubelet[2753]: I0124 00:32:24.826904 2753 policy_none.go:49] "None policy: Start"
Jan 24 00:32:24.826930 kubelet[2753]: I0124 00:32:24.826912 2753 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 00:32:24.826930 kubelet[2753]: I0124 00:32:24.826921 2753 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 00:32:24.827027 kubelet[2753]: I0124 00:32:24.827009 2753 state_mem.go:75] "Updated machine memory state"
Jan 24 00:32:24.829413 kubelet[2753]: I0124 00:32:24.828420 2753 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 24 00:32:24.829413 kubelet[2753]: I0124 00:32:24.828583 2753 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:32:24.829413 kubelet[2753]: I0124 00:32:24.828592 2753 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:32:24.830039 kubelet[2753]: I0124 00:32:24.830018 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:32:24.831241 kubelet[2753]: I0124 00:32:24.831217 2753 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.831862 kubelet[2753]: I0124 00:32:24.831826 2753 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.832090 kubelet[2753]: I0124 00:32:24.832074 2753 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.833607 kubelet[2753]: E0124 00:32:24.833594 2753 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:32:24.844061 kubelet[2753]: E0124 00:32:24.844025 2753 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.844582 kubelet[2753]: E0124 00:32:24.844562 2753 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-a9e48d2ea0\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.845048 kubelet[2753]: E0124 00:32:24.844973 2753 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.940015 kubelet[2753]: I0124 00:32:24.939295 2753 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.952156 kubelet[2753]: I0124 00:32:24.952087 2753 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:24.952308 kubelet[2753]: I0124 00:32:24.952237 2753 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.023936 kubelet[2753]: I0124 00:32:25.023849 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.023936 kubelet[2753]: I0124 00:32:25.023918 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024158 kubelet[2753]: I0124 00:32:25.023953 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9d571bb308c3cb313081f80c59e61eb-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"b9d571bb308c3cb313081f80c59e61eb\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024158 kubelet[2753]: I0124 00:32:25.023985 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b3f948cc2112cb16d830303c9b1b0-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"9b6b3f948cc2112cb16d830303c9b1b0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024158 kubelet[2753]: I0124 00:32:25.024010 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024158 kubelet[2753]: I0124 00:32:25.024041 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024158 kubelet[2753]: I0124 00:32:25.024065 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a6139feaf230f263171a889dcbfbc89-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"8a6139feaf230f263171a889dcbfbc89\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024375 kubelet[2753]: I0124 00:32:25.024087 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b3f948cc2112cb16d830303c9b1b0-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"9b6b3f948cc2112cb16d830303c9b1b0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.024375 kubelet[2753]: I0124 00:32:25.024111 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b6b3f948cc2112cb16d830303c9b1b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" (UID: \"9b6b3f948cc2112cb16d830303c9b1b0\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.699482 kubelet[2753]: I0124 00:32:25.698917 2753 apiserver.go:52] "Watching apiserver"
Jan 24 00:32:25.723449 kubelet[2753]: I0124 00:32:25.722573 2753 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 00:32:25.787094 kubelet[2753]: I0124 00:32:25.786999 2753 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.800070 kubelet[2753]: E0124 00:32:25.798888 2753 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a9e48d2ea0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0"
Jan 24 00:32:25.840819 kubelet[2753]: I0124 00:32:25.840656 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a9e48d2ea0" podStartSLOduration=2.840632527 podStartE2EDuration="2.840632527s" podCreationTimestamp="2026-01-24 00:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:25.840291725 +0000 UTC m=+1.208871086" watchObservedRunningTime="2026-01-24 00:32:25.840632527 +0000 UTC m=+1.209211888"
Jan 24 00:32:25.841095 kubelet[2753]: I0124 00:32:25.840889 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a9e48d2ea0" podStartSLOduration=2.840877832 podStartE2EDuration="2.840877832s" podCreationTimestamp="2026-01-24 00:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:25.822874463 +0000 UTC m=+1.191453824" watchObservedRunningTime="2026-01-24 00:32:25.840877832 +0000 UTC m=+1.209457193"
Jan 24 00:32:25.869593 kubelet[2753]: I0124 00:32:25.869514 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a9e48d2ea0" podStartSLOduration=2.869492679 podStartE2EDuration="2.869492679s" podCreationTimestamp="2026-01-24 00:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:25.853066666 +0000 UTC m=+1.221646037" watchObservedRunningTime="2026-01-24 00:32:25.869492679 +0000 UTC m=+1.238072040"
Jan 24 00:32:29.076275 kubelet[2753]: I0124 00:32:29.076138 2753 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 24 00:32:29.077047 containerd[1650]: time="2026-01-24T00:32:29.076929321Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 24 00:32:29.077552 kubelet[2753]: I0124 00:32:29.077204 2753 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 24 00:32:30.055486 kubelet[2753]: I0124 00:32:30.055317 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e67a527-dc56-414c-a054-f5f1e7d8efb5-kube-proxy\") pod \"kube-proxy-kx7b9\" (UID: \"5e67a527-dc56-414c-a054-f5f1e7d8efb5\") " pod="kube-system/kube-proxy-kx7b9"
Jan 24 00:32:30.055486 kubelet[2753]: I0124 00:32:30.055378 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e67a527-dc56-414c-a054-f5f1e7d8efb5-lib-modules\") pod \"kube-proxy-kx7b9\" (UID: \"5e67a527-dc56-414c-a054-f5f1e7d8efb5\") " pod="kube-system/kube-proxy-kx7b9"
Jan 24 00:32:30.055486 kubelet[2753]: I0124 00:32:30.055452 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e67a527-dc56-414c-a054-f5f1e7d8efb5-xtables-lock\") pod \"kube-proxy-kx7b9\" (UID: \"5e67a527-dc56-414c-a054-f5f1e7d8efb5\") " pod="kube-system/kube-proxy-kx7b9"
Jan 24 00:32:30.055773 kubelet[2753]: I0124 00:32:30.055502 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgzp\" (UniqueName: \"kubernetes.io/projected/5e67a527-dc56-414c-a054-f5f1e7d8efb5-kube-api-access-6hgzp\") pod \"kube-proxy-kx7b9\" (UID: \"5e67a527-dc56-414c-a054-f5f1e7d8efb5\") " pod="kube-system/kube-proxy-kx7b9"
Jan 24 00:32:30.156738 kubelet[2753]: I0124 00:32:30.156011 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/656a2ee3-91ec-4438-99d2-66fb734308a5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cg6z7\" (UID: \"656a2ee3-91ec-4438-99d2-66fb734308a5\") " pod="tigera-operator/tigera-operator-7dcd859c48-cg6z7"
Jan 24 00:32:30.156738 kubelet[2753]: I0124 00:32:30.156083 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxfh\" (UniqueName: \"kubernetes.io/projected/656a2ee3-91ec-4438-99d2-66fb734308a5-kube-api-access-9dxfh\") pod \"tigera-operator-7dcd859c48-cg6z7\" (UID: \"656a2ee3-91ec-4438-99d2-66fb734308a5\") " pod="tigera-operator/tigera-operator-7dcd859c48-cg6z7"
Jan 24 00:32:30.284697 containerd[1650]: time="2026-01-24T00:32:30.284662025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kx7b9,Uid:5e67a527-dc56-414c-a054-f5f1e7d8efb5,Namespace:kube-system,Attempt:0,}"
Jan 24 00:32:30.308484 containerd[1650]: time="2026-01-24T00:32:30.307861421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:30.308484 containerd[1650]: time="2026-01-24T00:32:30.307925510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:30.308484 containerd[1650]: time="2026-01-24T00:32:30.307937830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:30.308484 containerd[1650]: time="2026-01-24T00:32:30.308025640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:30.348820 containerd[1650]: time="2026-01-24T00:32:30.348700852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kx7b9,Uid:5e67a527-dc56-414c-a054-f5f1e7d8efb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"deeaa54a9b089992515e3ad076516a1b35fa6940fbc0ad8d39c469b66e580499\""
Jan 24 00:32:30.351192 containerd[1650]: time="2026-01-24T00:32:30.351086805Z" level=info msg="CreateContainer within sandbox \"deeaa54a9b089992515e3ad076516a1b35fa6940fbc0ad8d39c469b66e580499\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 24 00:32:30.362026 containerd[1650]: time="2026-01-24T00:32:30.361961125Z" level=info msg="CreateContainer within sandbox \"deeaa54a9b089992515e3ad076516a1b35fa6940fbc0ad8d39c469b66e580499\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54dec186289153865b0c7d81c982ab3cf9f8952f14bcf3ce521c2be7be3a2864\""
Jan 24 00:32:30.362618 containerd[1650]: time="2026-01-24T00:32:30.362560795Z" level=info msg="StartContainer for \"54dec186289153865b0c7d81c982ab3cf9f8952f14bcf3ce521c2be7be3a2864\""
Jan 24 00:32:30.430463 containerd[1650]: time="2026-01-24T00:32:30.430002469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cg6z7,Uid:656a2ee3-91ec-4438-99d2-66fb734308a5,Namespace:tigera-operator,Attempt:0,}"
Jan 24 00:32:30.444859 containerd[1650]: time="2026-01-24T00:32:30.444819607Z" level=info msg="StartContainer for \"54dec186289153865b0c7d81c982ab3cf9f8952f14bcf3ce521c2be7be3a2864\" returns successfully"
Jan 24 00:32:30.482294 containerd[1650]: time="2026-01-24T00:32:30.481792928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:30.482294 containerd[1650]: time="2026-01-24T00:32:30.481898047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:30.482294 containerd[1650]: time="2026-01-24T00:32:30.481918756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:30.482294 containerd[1650]: time="2026-01-24T00:32:30.482079494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:30.562314 containerd[1650]: time="2026-01-24T00:32:30.561692877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cg6z7,Uid:656a2ee3-91ec-4438-99d2-66fb734308a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f59b1c0bacd39e3d2f28183ed42eb756ac26cfe093305f8d9708e6bf466381d\""
Jan 24 00:32:30.565529 containerd[1650]: time="2026-01-24T00:32:30.564799348Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 24 00:32:32.380099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143294390.mount: Deactivated successfully.
Jan 24 00:32:32.877127 containerd[1650]: time="2026-01-24T00:32:32.877069269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:32.879886 containerd[1650]: time="2026-01-24T00:32:32.879444816Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 24 00:32:32.881997 containerd[1650]: time="2026-01-24T00:32:32.880524021Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:32.883664 containerd[1650]: time="2026-01-24T00:32:32.883639678Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:32.884558 containerd[1650]: time="2026-01-24T00:32:32.884529976Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.31900612s"
Jan 24 00:32:32.884588 containerd[1650]: time="2026-01-24T00:32:32.884564256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 24 00:32:32.890919 containerd[1650]: time="2026-01-24T00:32:32.890796290Z" level=info msg="CreateContainer within sandbox \"6f59b1c0bacd39e3d2f28183ed42eb756ac26cfe093305f8d9708e6bf466381d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 24 00:32:32.903734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751910170.mount: Deactivated successfully.
Jan 24 00:32:32.920100 containerd[1650]: time="2026-01-24T00:32:32.919630445Z" level=info msg="CreateContainer within sandbox \"6f59b1c0bacd39e3d2f28183ed42eb756ac26cfe093305f8d9708e6bf466381d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017\""
Jan 24 00:32:32.921080 containerd[1650]: time="2026-01-24T00:32:32.920944027Z" level=info msg="StartContainer for \"b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017\""
Jan 24 00:32:32.977989 containerd[1650]: time="2026-01-24T00:32:32.977319103Z" level=info msg="StartContainer for \"b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017\" returns successfully"
Jan 24 00:32:33.818812 kubelet[2753]: I0124 00:32:33.818567 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kx7b9" podStartSLOduration=4.81852404 podStartE2EDuration="4.81852404s" podCreationTimestamp="2026-01-24 00:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:30.820525334 +0000 UTC m=+6.189104705" watchObservedRunningTime="2026-01-24 00:32:33.81852404 +0000 UTC m=+9.187103401"
Jan 24 00:32:34.320308 kubelet[2753]: I0124 00:32:34.320192 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cg6z7" podStartSLOduration=1.996745152 podStartE2EDuration="4.320176058s" podCreationTimestamp="2026-01-24 00:32:30 +0000 UTC" firstStartedPulling="2026-01-24 00:32:30.564260657 +0000 UTC m=+5.932840008" lastFinishedPulling="2026-01-24 00:32:32.887691583 +0000 UTC m=+8.256270914" observedRunningTime="2026-01-24 00:32:33.819690695 +0000 UTC m=+9.188270056" watchObservedRunningTime="2026-01-24 00:32:34.320176058 +0000 UTC m=+9.688755379"
Jan 24 00:32:36.863377 update_engine[1634]: I20260124 00:32:36.863255 1634 update_attempter.cc:509] Updating boot flags...
Jan 24 00:32:36.955553 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (3126)
Jan 24 00:32:37.040414 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (3130)
Jan 24 00:32:37.102418 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (3130)
Jan 24 00:32:38.525718 sudo[1857]: pam_unix(sudo:session): session closed for user root
Jan 24 00:32:38.650591 sshd[1853]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:38.658720 systemd-logind[1620]: Session 7 logged out. Waiting for processes to exit.
Jan 24 00:32:38.660604 systemd[1]: sshd@6-65.109.167.77:22-20.161.92.111:52524.service: Deactivated successfully.
Jan 24 00:32:38.670565 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 00:32:38.673071 systemd-logind[1620]: Removed session 7.
Jan 24 00:32:42.652262 kubelet[2753]: I0124 00:32:42.651105 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba46303-7d6c-4311-9a4c-fe9b9fe9b348-tigera-ca-bundle\") pod \"calico-typha-6df585754d-5sqn4\" (UID: \"dba46303-7d6c-4311-9a4c-fe9b9fe9b348\") " pod="calico-system/calico-typha-6df585754d-5sqn4"
Jan 24 00:32:42.652262 kubelet[2753]: I0124 00:32:42.651194 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gn5s\" (UniqueName: \"kubernetes.io/projected/dba46303-7d6c-4311-9a4c-fe9b9fe9b348-kube-api-access-7gn5s\") pod \"calico-typha-6df585754d-5sqn4\" (UID: \"dba46303-7d6c-4311-9a4c-fe9b9fe9b348\") " pod="calico-system/calico-typha-6df585754d-5sqn4"
Jan 24 00:32:42.653373 kubelet[2753]: I0124 00:32:42.653343 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dba46303-7d6c-4311-9a4c-fe9b9fe9b348-typha-certs\") pod \"calico-typha-6df585754d-5sqn4\" (UID: \"dba46303-7d6c-4311-9a4c-fe9b9fe9b348\") " pod="calico-system/calico-typha-6df585754d-5sqn4"
Jan 24 00:32:42.856212 kubelet[2753]: I0124 00:32:42.854846 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-cni-bin-dir\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856212 kubelet[2753]: I0124 00:32:42.854877 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/33f6099b-b6ad-42dd-ac33-9294380e84d1-node-certs\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856212 kubelet[2753]: I0124 00:32:42.854891 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-xtables-lock\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856212 kubelet[2753]: I0124 00:32:42.854903 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-cni-log-dir\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856212 kubelet[2753]: I0124 00:32:42.854913 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-cni-net-dir\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856788 kubelet[2753]: I0124 00:32:42.854924 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33f6099b-b6ad-42dd-ac33-9294380e84d1-tigera-ca-bundle\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856788 kubelet[2753]: I0124 00:32:42.854935 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-var-run-calico\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856788 kubelet[2753]: I0124 00:32:42.854946 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvhl9\" (UniqueName: \"kubernetes.io/projected/33f6099b-b6ad-42dd-ac33-9294380e84d1-kube-api-access-qvhl9\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856788 kubelet[2753]: I0124 00:32:42.854960 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-lib-modules\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.856788 kubelet[2753]: I0124 00:32:42.854973 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-var-lib-calico\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.857001 kubelet[2753]: I0124 00:32:42.854984 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-flexvol-driver-host\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.857001 kubelet[2753]: I0124 00:32:42.854993 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/33f6099b-b6ad-42dd-ac33-9294380e84d1-policysync\") pod \"calico-node-kxn7g\" (UID: \"33f6099b-b6ad-42dd-ac33-9294380e84d1\") " pod="calico-system/calico-node-kxn7g"
Jan 24 00:32:42.932338 containerd[1650]: time="2026-01-24T00:32:42.932012685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df585754d-5sqn4,Uid:dba46303-7d6c-4311-9a4c-fe9b9fe9b348,Namespace:calico-system,Attempt:0,}"
Jan 24 00:32:42.961052 kubelet[2753]: E0124 00:32:42.961013 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:42.962580 kubelet[2753]: W0124 00:32:42.961230 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:42.962580 kubelet[2753]: E0124 00:32:42.961992 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:42.969650 kubelet[2753]: E0124 00:32:42.968927 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:42.969650 kubelet[2753]: W0124 00:32:42.968958 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:42.969650 kubelet[2753]: E0124 00:32:42.968989 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:42.978967 containerd[1650]: time="2026-01-24T00:32:42.973595325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:42.978967 containerd[1650]: time="2026-01-24T00:32:42.973723624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:42.978967 containerd[1650]: time="2026-01-24T00:32:42.973779114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:42.978967 containerd[1650]: time="2026-01-24T00:32:42.973969023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:42.987122 kubelet[2753]: E0124 00:32:42.986548 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:42.992468 kubelet[2753]: W0124 00:32:42.992381 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:42.992667 kubelet[2753]: E0124 00:32:42.992646 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:42.999033 kubelet[2753]: E0124 00:32:42.994322 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:42.999033 kubelet[2753]: W0124 00:32:42.994338 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:42.999033 kubelet[2753]: E0124 00:32:42.994355 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:42.999033 kubelet[2753]: E0124 00:32:42.995122 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:42.999033 kubelet[2753]: W0124 00:32:42.995133 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:42.999033 kubelet[2753]: E0124 00:32:42.995883 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:42.999033 kubelet[2753]: W0124 00:32:42.995895 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:42.999033 kubelet[2753]: E0124 00:32:42.995908 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:42.999033 kubelet[2753]: E0124 00:32:42.995948 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.028442 kubelet[2753]: E0124 00:32:43.028327 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97"
Jan 24 00:32:43.045242 kubelet[2753]: E0124 00:32:43.045173 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.045242 kubelet[2753]: W0124 00:32:43.045193 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.045242 kubelet[2753]: E0124 00:32:43.045211 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.045685 kubelet[2753]: E0124 00:32:43.045632 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.045685 kubelet[2753]: W0124 00:32:43.045648 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.045685 kubelet[2753]: E0124 00:32:43.045660 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.046036 kubelet[2753]: E0124 00:32:43.046004 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.046036 kubelet[2753]: W0124 00:32:43.046016 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.046036 kubelet[2753]: E0124 00:32:43.046024 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.046345 kubelet[2753]: E0124 00:32:43.046320 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.046345 kubelet[2753]: W0124 00:32:43.046331 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.046345 kubelet[2753]: E0124 00:32:43.046340 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.046695 kubelet[2753]: E0124 00:32:43.046669 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.046695 kubelet[2753]: W0124 00:32:43.046680 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.046695 kubelet[2753]: E0124 00:32:43.046687 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.047025 kubelet[2753]: E0124 00:32:43.047000 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.047025 kubelet[2753]: W0124 00:32:43.047009 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.047025 kubelet[2753]: E0124 00:32:43.047016 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.047340 kubelet[2753]: E0124 00:32:43.047316 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.047340 kubelet[2753]: W0124 00:32:43.047335 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.047451 kubelet[2753]: E0124 00:32:43.047344 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.047854 kubelet[2753]: E0124 00:32:43.047830 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.047854 kubelet[2753]: W0124 00:32:43.047841 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.047854 kubelet[2753]: E0124 00:32:43.047849 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.048353 kubelet[2753]: E0124 00:32:43.048203 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.048353 kubelet[2753]: W0124 00:32:43.048221 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.048353 kubelet[2753]: E0124 00:32:43.048238 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.049221 kubelet[2753]: E0124 00:32:43.048997 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.049221 kubelet[2753]: W0124 00:32:43.049013 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.049221 kubelet[2753]: E0124 00:32:43.049027 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.051081 kubelet[2753]: E0124 00:32:43.050595 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.051081 kubelet[2753]: W0124 00:32:43.050612 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.051081 kubelet[2753]: E0124 00:32:43.050627 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.054250 kubelet[2753]: E0124 00:32:43.053809 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.054250 kubelet[2753]: W0124 00:32:43.053827 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.054250 kubelet[2753]: E0124 00:32:43.053844 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.058451 kubelet[2753]: E0124 00:32:43.056582 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.058451 kubelet[2753]: W0124 00:32:43.056603 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.058451 kubelet[2753]: E0124 00:32:43.056621 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.059423 kubelet[2753]: E0124 00:32:43.058932 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.059423 kubelet[2753]: W0124 00:32:43.058951 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.059423 kubelet[2753]: E0124 00:32:43.058969 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.059821 kubelet[2753]: E0124 00:32:43.059647 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.060354 kubelet[2753]: W0124 00:32:43.059884 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.060354 kubelet[2753]: E0124 00:32:43.059905 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.060591 kubelet[2753]: E0124 00:32:43.060572 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.060697 kubelet[2753]: W0124 00:32:43.060681 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.061360 kubelet[2753]: E0124 00:32:43.061050 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.062371 kubelet[2753]: E0124 00:32:43.062352 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.062798 kubelet[2753]: W0124 00:32:43.062672 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.062798 kubelet[2753]: E0124 00:32:43.062699 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.065423 kubelet[2753]: E0124 00:32:43.065272 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.065423 kubelet[2753]: W0124 00:32:43.065318 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.065423 kubelet[2753]: E0124 00:32:43.065337 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.067639 kubelet[2753]: E0124 00:32:43.067481 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.067639 kubelet[2753]: W0124 00:32:43.067500 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.067639 kubelet[2753]: E0124 00:32:43.067516 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.069727 kubelet[2753]: E0124 00:32:43.069479 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.069727 kubelet[2753]: W0124 00:32:43.069498 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.069727 kubelet[2753]: E0124 00:32:43.069514 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.072599 kubelet[2753]: E0124 00:32:43.072235 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.072599 kubelet[2753]: W0124 00:32:43.072378 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.072599 kubelet[2753]: E0124 00:32:43.072459 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.072599 kubelet[2753]: I0124 00:32:43.072492 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08d51dd3-a54b-4b8c-9510-41c1d4106f97-registration-dir\") pod \"csi-node-driver-jv7gx\" (UID: \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\") " pod="calico-system/csi-node-driver-jv7gx"
Jan 24 00:32:43.073654 kubelet[2753]: E0124 00:32:43.073357 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.073654 kubelet[2753]: W0124 00:32:43.073375 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.073654 kubelet[2753]: E0124 00:32:43.073478 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.073654 kubelet[2753]: I0124 00:32:43.073508 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08d51dd3-a54b-4b8c-9510-41c1d4106f97-socket-dir\") pod \"csi-node-driver-jv7gx\" (UID: \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\") " pod="calico-system/csi-node-driver-jv7gx"
Jan 24 00:32:43.073965 kubelet[2753]: E0124 00:32:43.073890 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.074052 kubelet[2753]: W0124 00:32:43.074038 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.074326 kubelet[2753]: E0124 00:32:43.074241 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.075702 kubelet[2753]: E0124 00:32:43.075309 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.075702 kubelet[2753]: W0124 00:32:43.075320 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.075702 kubelet[2753]: E0124 00:32:43.075422 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.075702 kubelet[2753]: E0124 00:32:43.075642 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.075702 kubelet[2753]: W0124 00:32:43.075667 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.075921 kubelet[2753]: E0124 00:32:43.075761 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.075959 kubelet[2753]: E0124 00:32:43.075951 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.075959 kubelet[2753]: W0124 00:32:43.075957 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.076020 kubelet[2753]: E0124 00:32:43.075971 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.077422 kubelet[2753]: I0124 00:32:43.076095 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08d51dd3-a54b-4b8c-9510-41c1d4106f97-kubelet-dir\") pod \"csi-node-driver-jv7gx\" (UID: \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\") " pod="calico-system/csi-node-driver-jv7gx"
Jan 24 00:32:43.077422 kubelet[2753]: E0124 00:32:43.076261 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.077422 kubelet[2753]: W0124 00:32:43.076267 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.077422 kubelet[2753]: E0124 00:32:43.076274 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.078685 kubelet[2753]: E0124 00:32:43.078660 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.078834 kubelet[2753]: W0124 00:32:43.078727 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.078834 kubelet[2753]: E0124 00:32:43.078760 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.079059 kubelet[2753]: E0124 00:32:43.079051 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.079116 kubelet[2753]: W0124 00:32:43.079109 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.079257 kubelet[2753]: E0124 00:32:43.079248 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.079596 kubelet[2753]: E0124 00:32:43.079494 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.079596 kubelet[2753]: W0124 00:32:43.079518 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.079596 kubelet[2753]: E0124 00:32:43.079526 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.079596 kubelet[2753]: I0124 00:32:43.079551 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/08d51dd3-a54b-4b8c-9510-41c1d4106f97-varrun\") pod \"csi-node-driver-jv7gx\" (UID: \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\") " pod="calico-system/csi-node-driver-jv7gx"
Jan 24 00:32:43.080014 kubelet[2753]: E0124 00:32:43.079932 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.080014 kubelet[2753]: W0124 00:32:43.079941 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.080014 kubelet[2753]: E0124 00:32:43.079955 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.080014 kubelet[2753]: I0124 00:32:43.079967 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrxx\" (UniqueName: \"kubernetes.io/projected/08d51dd3-a54b-4b8c-9510-41c1d4106f97-kube-api-access-fqrxx\") pod \"csi-node-driver-jv7gx\" (UID: \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\") " pod="calico-system/csi-node-driver-jv7gx"
Jan 24 00:32:43.080430 kubelet[2753]: E0124 00:32:43.080341 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.080430 kubelet[2753]: W0124 00:32:43.080349 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.080430 kubelet[2753]: E0124 00:32:43.080359 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:43.080668 kubelet[2753]: E0124 00:32:43.080644 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:43.080668 kubelet[2753]: W0124 00:32:43.080652 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:43.080781 kubelet[2753]: E0124 00:32:43.080729 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 24 00:32:43.080957 kubelet[2753]: E0124 00:32:43.080950 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.081035 kubelet[2753]: W0124 00:32:43.080992 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.081035 kubelet[2753]: E0124 00:32:43.081001 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.081249 kubelet[2753]: E0124 00:32:43.081239 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.081315 kubelet[2753]: W0124 00:32:43.081290 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.081315 kubelet[2753]: E0124 00:32:43.081301 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.085556 containerd[1650]: time="2026-01-24T00:32:43.085515611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df585754d-5sqn4,Uid:dba46303-7d6c-4311-9a4c-fe9b9fe9b348,Namespace:calico-system,Attempt:0,} returns sandbox id \"60c701055a56bc51c7d8989ba52a0aa94830810bc81b8bfd1487df28bab3d6c5\"" Jan 24 00:32:43.087057 containerd[1650]: time="2026-01-24T00:32:43.086894632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:32:43.137042 containerd[1650]: time="2026-01-24T00:32:43.137008845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxn7g,Uid:33f6099b-b6ad-42dd-ac33-9294380e84d1,Namespace:calico-system,Attempt:0,}" Jan 24 00:32:43.159233 containerd[1650]: time="2026-01-24T00:32:43.158996651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:32:43.159233 containerd[1650]: time="2026-01-24T00:32:43.159048861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:32:43.159233 containerd[1650]: time="2026-01-24T00:32:43.159059951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:43.159233 containerd[1650]: time="2026-01-24T00:32:43.159130381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:32:43.181768 kubelet[2753]: E0124 00:32:43.181477 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.181768 kubelet[2753]: W0124 00:32:43.181494 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.181768 kubelet[2753]: E0124 00:32:43.181512 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.182054 kubelet[2753]: E0124 00:32:43.182042 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.182304 kubelet[2753]: W0124 00:32:43.182095 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.182304 kubelet[2753]: E0124 00:32:43.182110 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.182741 kubelet[2753]: E0124 00:32:43.182692 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.183326 kubelet[2753]: W0124 00:32:43.182877 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.183326 kubelet[2753]: E0124 00:32:43.182892 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.185487 kubelet[2753]: E0124 00:32:43.184639 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.185487 kubelet[2753]: W0124 00:32:43.184649 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.185487 kubelet[2753]: E0124 00:32:43.184662 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.185724 kubelet[2753]: E0124 00:32:43.185714 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.185967 kubelet[2753]: W0124 00:32:43.185870 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.186158 kubelet[2753]: E0124 00:32:43.186100 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:43.187540 kubelet[2753]: E0124 00:32:43.187022 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.187540 kubelet[2753]: W0124 00:32:43.187035 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.187540 kubelet[2753]: E0124 00:32:43.187053 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.189396 kubelet[2753]: E0124 00:32:43.188473 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.189396 kubelet[2753]: W0124 00:32:43.188487 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.189396 kubelet[2753]: E0124 00:32:43.188500 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.189478 kubelet[2753]: E0124 00:32:43.189444 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.189478 kubelet[2753]: W0124 00:32:43.189453 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.189478 kubelet[2753]: E0124 00:32:43.189463 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.189914 kubelet[2753]: E0124 00:32:43.189836 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.189914 kubelet[2753]: W0124 00:32:43.189849 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.190284 kubelet[2753]: E0124 00:32:43.189970 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.190512 kubelet[2753]: E0124 00:32:43.190474 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.190512 kubelet[2753]: W0124 00:32:43.190485 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.190848 kubelet[2753]: E0124 00:32:43.190718 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:43.191738 kubelet[2753]: E0124 00:32:43.191591 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.191738 kubelet[2753]: W0124 00:32:43.191600 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.191738 kubelet[2753]: E0124 00:32:43.191609 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.192187 kubelet[2753]: E0124 00:32:43.191976 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.192187 kubelet[2753]: W0124 00:32:43.191984 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.192187 kubelet[2753]: E0124 00:32:43.191992 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.193340 kubelet[2753]: E0124 00:32:43.193090 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.193340 kubelet[2753]: W0124 00:32:43.193273 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.193546 kubelet[2753]: E0124 00:32:43.193498 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.195562 kubelet[2753]: E0124 00:32:43.195541 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.197335 kubelet[2753]: W0124 00:32:43.197175 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.197436 kubelet[2753]: E0124 00:32:43.197425 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.197480 kubelet[2753]: E0124 00:32:43.197430 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.199975 kubelet[2753]: W0124 00:32:43.199417 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.199975 kubelet[2753]: E0124 00:32:43.199527 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:43.199975 kubelet[2753]: E0124 00:32:43.199741 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.199975 kubelet[2753]: W0124 00:32:43.199747 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.199975 kubelet[2753]: E0124 00:32:43.199786 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.201400 kubelet[2753]: E0124 00:32:43.200135 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.201400 kubelet[2753]: W0124 00:32:43.200144 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.201400 kubelet[2753]: E0124 00:32:43.200170 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.201400 kubelet[2753]: E0124 00:32:43.200601 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.201400 kubelet[2753]: W0124 00:32:43.200609 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.201400 kubelet[2753]: E0124 00:32:43.200638 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.201726 kubelet[2753]: E0124 00:32:43.201710 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.201786 kubelet[2753]: W0124 00:32:43.201778 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.201901 kubelet[2753]: E0124 00:32:43.201892 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.202092 kubelet[2753]: E0124 00:32:43.202084 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.202145 kubelet[2753]: W0124 00:32:43.202139 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.202495 kubelet[2753]: E0124 00:32:43.202485 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:43.203690 kubelet[2753]: E0124 00:32:43.203578 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.203690 kubelet[2753]: W0124 00:32:43.203588 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.203775 kubelet[2753]: E0124 00:32:43.203766 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.203953 kubelet[2753]: E0124 00:32:43.203940 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.203980 kubelet[2753]: W0124 00:32:43.203953 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.204036 kubelet[2753]: E0124 00:32:43.204024 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.204224 kubelet[2753]: E0124 00:32:43.204212 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.204224 kubelet[2753]: W0124 00:32:43.204223 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.204304 kubelet[2753]: E0124 00:32:43.204295 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.204480 kubelet[2753]: E0124 00:32:43.204470 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.204508 kubelet[2753]: W0124 00:32:43.204481 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.204508 kubelet[2753]: E0124 00:32:43.204494 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.204728 kubelet[2753]: E0124 00:32:43.204714 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.204728 kubelet[2753]: W0124 00:32:43.204725 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.204818 kubelet[2753]: E0124 00:32:43.204734 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:43.206358 kubelet[2753]: E0124 00:32:43.206348 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:43.206457 kubelet[2753]: W0124 00:32:43.206445 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:43.206504 kubelet[2753]: E0124 00:32:43.206495 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:43.209933 containerd[1650]: time="2026-01-24T00:32:43.209906889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxn7g,Uid:33f6099b-b6ad-42dd-ac33-9294380e84d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\"" Jan 24 00:32:44.733668 kubelet[2753]: E0124 00:32:44.732928 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:32:45.018353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432801698.mount: Deactivated successfully. Jan 24 00:32:45.975687 containerd[1650]: time="2026-01-24T00:32:45.975367391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:45.976328 containerd[1650]: time="2026-01-24T00:32:45.976287806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 24 00:32:45.978460 containerd[1650]: time="2026-01-24T00:32:45.977739938Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:45.979917 containerd[1650]: time="2026-01-24T00:32:45.979657087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:32:45.980470 containerd[1650]: time="2026-01-24T00:32:45.980452192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.89353084s" Jan 24 00:32:45.980520 containerd[1650]: time="2026-01-24T00:32:45.980511002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:32:45.981447 containerd[1650]: time="2026-01-24T00:32:45.981432506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:32:45.994797 containerd[1650]: time="2026-01-24T00:32:45.994747881Z" level=info msg="CreateContainer within sandbox \"60c701055a56bc51c7d8989ba52a0aa94830810bc81b8bfd1487df28bab3d6c5\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:32:46.005050 containerd[1650]: time="2026-01-24T00:32:46.005011954Z" level=info msg="CreateContainer within sandbox \"60c701055a56bc51c7d8989ba52a0aa94830810bc81b8bfd1487df28bab3d6c5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a60c7d185d7786475aaeb8fc532c8ac4cc0710dde464a0ebbf41d99e61b9ad21\"" Jan 24 00:32:46.005619 containerd[1650]: time="2026-01-24T00:32:46.005565071Z" level=info msg="StartContainer for \"a60c7d185d7786475aaeb8fc532c8ac4cc0710dde464a0ebbf41d99e61b9ad21\"" Jan 24 00:32:46.076421 containerd[1650]: time="2026-01-24T00:32:46.076303526Z" level=info msg="StartContainer for \"a60c7d185d7786475aaeb8fc532c8ac4cc0710dde464a0ebbf41d99e61b9ad21\" returns successfully" Jan 24 00:32:46.731171 kubelet[2753]: E0124 00:32:46.730733 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:32:46.894863 kubelet[2753]: E0124 00:32:46.894779 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.894863 kubelet[2753]: W0124 00:32:46.894812 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.894863 kubelet[2753]: E0124 00:32:46.894836 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.895235 kubelet[2753]: E0124 00:32:46.895091 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.895235 kubelet[2753]: W0124 00:32:46.895099 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.895235 kubelet[2753]: E0124 00:32:46.895108 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.895462 kubelet[2753]: E0124 00:32:46.895279 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.895462 kubelet[2753]: W0124 00:32:46.895285 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.895462 kubelet[2753]: E0124 00:32:46.895291 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:46.895646 kubelet[2753]: E0124 00:32:46.895516 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.895646 kubelet[2753]: W0124 00:32:46.895523 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.895646 kubelet[2753]: E0124 00:32:46.895529 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.895803 kubelet[2753]: E0124 00:32:46.895712 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.895803 kubelet[2753]: W0124 00:32:46.895718 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.895803 kubelet[2753]: E0124 00:32:46.895724 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.895996 kubelet[2753]: E0124 00:32:46.895900 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.895996 kubelet[2753]: W0124 00:32:46.895906 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.895996 kubelet[2753]: E0124 00:32:46.895912 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.896144 kubelet[2753]: E0124 00:32:46.896080 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.896144 kubelet[2753]: W0124 00:32:46.896086 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.896144 kubelet[2753]: E0124 00:32:46.896092 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.896286 kubelet[2753]: E0124 00:32:46.896256 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.896286 kubelet[2753]: W0124 00:32:46.896262 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.896286 kubelet[2753]: E0124 00:32:46.896268 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:46.896478 kubelet[2753]: E0124 00:32:46.896464 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.896478 kubelet[2753]: W0124 00:32:46.896470 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.896478 kubelet[2753]: E0124 00:32:46.896476 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.896671 kubelet[2753]: E0124 00:32:46.896643 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.896671 kubelet[2753]: W0124 00:32:46.896648 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.896671 kubelet[2753]: E0124 00:32:46.896654 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.896875 kubelet[2753]: E0124 00:32:46.896824 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.896875 kubelet[2753]: W0124 00:32:46.896830 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.896875 kubelet[2753]: E0124 00:32:46.896836 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.897066 kubelet[2753]: E0124 00:32:46.897026 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.897066 kubelet[2753]: W0124 00:32:46.897038 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.897066 kubelet[2753]: E0124 00:32:46.897045 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.897335 kubelet[2753]: E0124 00:32:46.897236 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.897335 kubelet[2753]: W0124 00:32:46.897243 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.897335 kubelet[2753]: E0124 00:32:46.897249 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:46.897586 kubelet[2753]: E0124 00:32:46.897450 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.897586 kubelet[2753]: W0124 00:32:46.897456 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.897586 kubelet[2753]: E0124 00:32:46.897462 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.897741 kubelet[2753]: E0124 00:32:46.897637 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.897741 kubelet[2753]: W0124 00:32:46.897643 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.897741 kubelet[2753]: E0124 00:32:46.897648 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.913577 kubelet[2753]: E0124 00:32:46.913502 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.913577 kubelet[2753]: W0124 00:32:46.913537 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.913577 kubelet[2753]: E0124 00:32:46.913564 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.914158 kubelet[2753]: E0124 00:32:46.914079 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.914158 kubelet[2753]: W0124 00:32:46.914094 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.914158 kubelet[2753]: E0124 00:32:46.914125 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.914773 kubelet[2753]: E0124 00:32:46.914714 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.914773 kubelet[2753]: W0124 00:32:46.914753 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.914773 kubelet[2753]: E0124 00:32:46.914815 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:46.915602 kubelet[2753]: E0124 00:32:46.915573 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.915602 kubelet[2753]: W0124 00:32:46.915600 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.915732 kubelet[2753]: E0124 00:32:46.915717 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.916545 kubelet[2753]: E0124 00:32:46.916244 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.916545 kubelet[2753]: W0124 00:32:46.916268 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.916545 kubelet[2753]: E0124 00:32:46.916373 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.917110 kubelet[2753]: E0124 00:32:46.917068 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.917110 kubelet[2753]: W0124 00:32:46.917093 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.917475 kubelet[2753]: E0124 00:32:46.917221 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.917963 kubelet[2753]: E0124 00:32:46.917733 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.917963 kubelet[2753]: W0124 00:32:46.917758 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.917963 kubelet[2753]: E0124 00:32:46.917829 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.918286 kubelet[2753]: E0124 00:32:46.918251 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.918286 kubelet[2753]: W0124 00:32:46.918274 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.918758 kubelet[2753]: E0124 00:32:46.918566 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:46.918956 kubelet[2753]: E0124 00:32:46.918917 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.918956 kubelet[2753]: W0124 00:32:46.918951 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.919190 kubelet[2753]: E0124 00:32:46.919142 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.919618 kubelet[2753]: E0124 00:32:46.919583 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.919618 kubelet[2753]: W0124 00:32:46.919603 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.920024 kubelet[2753]: E0124 00:32:46.919770 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.920310 kubelet[2753]: E0124 00:32:46.920274 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.920310 kubelet[2753]: W0124 00:32:46.920297 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.920519 kubelet[2753]: E0124 00:32:46.920484 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.921002 kubelet[2753]: E0124 00:32:46.920914 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.921002 kubelet[2753]: W0124 00:32:46.920940 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.921282 kubelet[2753]: E0124 00:32:46.921102 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:46.921581 kubelet[2753]: E0124 00:32:46.921508 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:46.921581 kubelet[2753]: W0124 00:32:46.921540 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:46.923305 kubelet[2753]: E0124 00:32:46.921695 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 24 00:32:46.923305 kubelet[2753]: E0124 00:32:46.921991 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:46.923305 kubelet[2753]: W0124 00:32:46.922005 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:46.923305 kubelet[2753]: E0124 00:32:46.922097 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:47.862976 kubelet[2753]: I0124 00:32:47.862942 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:32:47.888509 containerd[1650]: time="2026-01-24T00:32:47.887897057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:47.888991 containerd[1650]: time="2026-01-24T00:32:47.888928412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 24 00:32:47.889586 containerd[1650]: time="2026-01-24T00:32:47.889554069Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:47.892379 containerd[1650]: time="2026-01-24T00:32:47.892336875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:47.892975 containerd[1650]: time="2026-01-24T00:32:47.892942121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.911402655s"
Jan 24 00:32:47.893000 containerd[1650]: time="2026-01-24T00:32:47.892976432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:32:47.895977 containerd[1650]: time="2026-01-24T00:32:47.895942417Z" level=info msg="CreateContainer within sandbox \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:32:47.907363 kubelet[2753]: E0124 00:32:47.907300 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:47.907363 kubelet[2753]: W0124 00:32:47.907323 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:47.908428 kubelet[2753]: E0124 00:32:47.907676 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:47.911682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893558053.mount: Deactivated successfully.
Jan 24 00:32:47.912657 containerd[1650]: time="2026-01-24T00:32:47.912554745Z" level=info msg="CreateContainer within sandbox \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4566790717505834fef198acf99d3e2e2e844500d4d6177709e898aa927e43a\""
Jan 24 00:32:47.914186 containerd[1650]: time="2026-01-24T00:32:47.913294241Z" level=info msg="StartContainer for \"c4566790717505834fef198acf99d3e2e2e844500d4d6177709e898aa927e43a\""
Jan 24 00:32:47.935339 kubelet[2753]: E0124 00:32:47.935321 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:47.935339 kubelet[2753]: W0124 00:32:47.935331 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:47.935381 kubelet[2753]: E0124 00:32:47.935338 2753 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
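The recurring driver-call.go/plugins.go burst above is the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds directory exists, but its uds executable does not, so every "uds init" call yields empty output, and unmarshalling "" fails with "unexpected end of JSON input". For orientation only (the real nodeagent~uds driver is Istio's and is not reproduced here), a FlexVolume driver is just an executable that answers each invocation with a JSON status object on stdout; a minimal Go stub for the init call might look like:

    // flexvol_stub.go - illustrative sketch, not the actual nodeagent~uds driver.
    // A FlexVolume driver must print a JSON status to stdout for every call;
    // an empty reply is exactly what produces the "unexpected end of JSON input"
    // unmarshal errors logged above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Println(`{"status":"Failure","message":"no command given"}`)
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // init must succeed and advertise capabilities; attach=false tells
            // the kubelet this driver has no separate attach/detach phase.
            out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
            fmt.Println(string(out))
        default:
            // Calls the driver does not implement still need valid JSON back.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Until a working binary exists at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the kubelet skips the plugin directory and repeats the same burst on every rescan, which is all these errors amount to.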
Jan 24 00:32:47.984507 containerd[1650]: time="2026-01-24T00:32:47.984466809Z" level=info msg="StartContainer for \"c4566790717505834fef198acf99d3e2e2e844500d4d6177709e898aa927e43a\" returns successfully"
Jan 24 00:32:48.082514 containerd[1650]: time="2026-01-24T00:32:48.082433061Z" level=info msg="shim disconnected" id=c4566790717505834fef198acf99d3e2e2e844500d4d6177709e898aa927e43a namespace=k8s.io
Jan 24 00:32:48.082514 containerd[1650]: time="2026-01-24T00:32:48.082487301Z" level=warning msg="cleaning up after shim disconnected" id=c4566790717505834fef198acf99d3e2e2e844500d4d6177709e898aa927e43a namespace=k8s.io
Jan 24 00:32:48.082514 containerd[1650]: time="2026-01-24T00:32:48.082495151Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:32:48.731237 kubelet[2753]: E0124 00:32:48.730842 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97"
Jan 24 00:32:48.869529 containerd[1650]: time="2026-01-24T00:32:48.869346850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 00:32:48.896062 kubelet[2753]: I0124 00:32:48.894485 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6df585754d-5sqn4" podStartSLOduration=3.999698802 podStartE2EDuration="6.894461574s" podCreationTimestamp="2026-01-24 00:32:42 +0000 UTC" firstStartedPulling="2026-01-24 00:32:43.086559675 +0000 UTC m=+18.455138996" lastFinishedPulling="2026-01-24 00:32:45.981322447 +0000 UTC m=+21.349901768" observedRunningTime="2026-01-24 00:32:46.87620905 +0000 UTC m=+22.244788371" watchObservedRunningTime="2026-01-24 00:32:48.894461574 +0000 UTC m=+24.263040945"
Jan 24 00:32:48.908186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4566790717505834fef198acf99d3e2e2e844500d4d6177709e898aa927e43a-rootfs.mount: Deactivated successfully.
Jan 24 00:32:50.393610 kubelet[2753]: I0124 00:32:50.393107 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:32:50.731731 kubelet[2753]: E0124 00:32:50.731111 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97"
Jan 24 00:32:52.732219 kubelet[2753]: E0124 00:32:52.731959 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97"
Jan 24 00:32:52.838452 containerd[1650]: time="2026-01-24T00:32:52.838399062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:52.842817 containerd[1650]: time="2026-01-24T00:32:52.842262908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 24 00:32:52.843572 containerd[1650]: time="2026-01-24T00:32:52.843538144Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:52.845321 containerd[1650]: time="2026-01-24T00:32:52.845288838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:52.845939 containerd[1650]: time="2026-01-24T00:32:52.845746786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.976318017s"
Jan 24 00:32:52.845939 containerd[1650]: time="2026-01-24T00:32:52.845769496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 24 00:32:52.848787 containerd[1650]: time="2026-01-24T00:32:52.848699397Z" level=info msg="CreateContainer within sandbox \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 24 00:32:52.866445 containerd[1650]: time="2026-01-24T00:32:52.866160996Z" level=info msg="CreateContainer within sandbox \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a10f632a06de11f8cbbbec859601ec707c0a4c4e205cc6977ffad99adaf3f46b\""
Jan 24 00:32:52.867442 containerd[1650]: time="2026-01-24T00:32:52.867281991Z" level=info msg="StartContainer for \"a10f632a06de11f8cbbbec859601ec707c0a4c4e205cc6977ffad99adaf3f46b\""
Jan 24 00:32:52.925321 containerd[1650]: time="2026-01-24T00:32:52.925255210Z" level=info msg="StartContainer for \"a10f632a06de11f8cbbbec859601ec707c0a4c4e205cc6977ffad99adaf3f46b\" returns successfully"
Jan 24 00:32:53.614057 containerd[1650]: time="2026-01-24T00:32:53.613987679Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
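The level=error record above is containerd's CRI plugin reacting to a filesystem event in /etc/cni/net.d: the install-cni container has started writing support files (here calico-kubeconfig), but no *.conf or *.conflist network configuration exists yet, so the reload still finds nothing and the runtime keeps reporting "cni plugin not initialized". A simplified sketch of that readiness condition, assuming only that a CNI config file must appear in the conf dir (containerd's actual loader in go-cni does considerably more validation):

    // cni_ready.go - simplified illustration of the check behind the
    // "no network config found in /etc/cni/net.d" error above. A support
    // file such as calico-kubeconfig does not count as a network config.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil // at least one candidate network config
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/cni/net.d")
        if err != nil || !ok {
            fmt.Println("cni config load failed: no network config found in /etc/cni/net.d")
            return
        }
        fmt.Println("cni network config present")
    }

Once install-cni writes its network config into that directory (for Calico this is typically 10-calico.conflist), a later fs event lets the reload succeed and NetworkReady flips to true; until then the "network is not ready" pod_workers errors above keep recurring.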
time="2026-01-24T00:32:53.613987679Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:32:53.628783 kubelet[2753]: I0124 00:32:53.628731 2753 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:32:53.705898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a10f632a06de11f8cbbbec859601ec707c0a4c4e205cc6977ffad99adaf3f46b-rootfs.mount: Deactivated successfully. Jan 24 00:32:53.768226 containerd[1650]: time="2026-01-24T00:32:53.768174451Z" level=info msg="shim disconnected" id=a10f632a06de11f8cbbbec859601ec707c0a4c4e205cc6977ffad99adaf3f46b namespace=k8s.io Jan 24 00:32:53.768226 containerd[1650]: time="2026-01-24T00:32:53.768221731Z" level=warning msg="cleaning up after shim disconnected" id=a10f632a06de11f8cbbbec859601ec707c0a4c4e205cc6977ffad99adaf3f46b namespace=k8s.io Jan 24 00:32:53.768226 containerd[1650]: time="2026-01-24T00:32:53.768229181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:32:53.871223 kubelet[2753]: I0124 00:32:53.871039 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8571ab88-459c-48f7-a296-37c9ac9b6a8a-calico-apiserver-certs\") pod \"calico-apiserver-f4f66fd65-vcrjr\" (UID: \"8571ab88-459c-48f7-a296-37c9ac9b6a8a\") " pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" Jan 24 00:32:53.871223 kubelet[2753]: I0124 00:32:53.871107 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-ca-bundle\") pod \"whisker-6974f9cb6-rk89t\" (UID: \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\") " pod="calico-system/whisker-6974f9cb6-rk89t" Jan 24 00:32:53.871223 kubelet[2753]: I0124 00:32:53.871135 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqtp\" (UniqueName: \"kubernetes.io/projected/d4b6ade8-6f1f-4156-806d-99bf4d2944e2-kube-api-access-zdqtp\") pod \"coredns-668d6bf9bc-9hl7k\" (UID: \"d4b6ade8-6f1f-4156-806d-99bf4d2944e2\") " pod="kube-system/coredns-668d6bf9bc-9hl7k" Jan 24 00:32:53.871223 kubelet[2753]: I0124 00:32:53.871159 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67637af-e8b0-4286-97b2-b018e1728d18-config-volume\") pod \"coredns-668d6bf9bc-xrqs4\" (UID: \"c67637af-e8b0-4286-97b2-b018e1728d18\") " pod="kube-system/coredns-668d6bf9bc-xrqs4" Jan 24 00:32:53.871223 kubelet[2753]: I0124 00:32:53.871188 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60b48194-9cf1-4af7-bca5-7353b7dd4d41-tigera-ca-bundle\") pod \"calico-kube-controllers-65d6744f47-ksmv4\" (UID: \"60b48194-9cf1-4af7-bca5-7353b7dd4d41\") " pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" Jan 24 00:32:53.872933 kubelet[2753]: I0124 00:32:53.871212 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-backend-key-pair\") pod \"whisker-6974f9cb6-rk89t\" (UID: \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\") " pod="calico-system/whisker-6974f9cb6-rk89t" Jan 24 00:32:53.872933 kubelet[2753]: I0124 00:32:53.871237 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8nk7\" (UniqueName: \"kubernetes.io/projected/8571ab88-459c-48f7-a296-37c9ac9b6a8a-kube-api-access-r8nk7\") pod \"calico-apiserver-f4f66fd65-vcrjr\" (UID: \"8571ab88-459c-48f7-a296-37c9ac9b6a8a\") " pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" Jan 24 00:32:53.872933 kubelet[2753]: I0124 00:32:53.871265 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vg8\" (UniqueName: \"kubernetes.io/projected/e954bcbc-6a7d-4fa9-9256-747a5b39530e-kube-api-access-j6vg8\") pod \"goldmane-666569f655-r5n27\" (UID: \"e954bcbc-6a7d-4fa9-9256-747a5b39530e\") " pod="calico-system/goldmane-666569f655-r5n27" Jan 24 00:32:53.872933 kubelet[2753]: I0124 00:32:53.871291 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwzh\" (UniqueName: \"kubernetes.io/projected/45f9a298-5fb0-472f-b747-58a979ff2009-kube-api-access-7xwzh\") pod \"calico-apiserver-f4f66fd65-f9s4x\" (UID: \"45f9a298-5fb0-472f-b747-58a979ff2009\") " pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" Jan 24 00:32:53.872933 kubelet[2753]: I0124 00:32:53.871325 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/45f9a298-5fb0-472f-b747-58a979ff2009-calico-apiserver-certs\") pod \"calico-apiserver-f4f66fd65-f9s4x\" (UID: \"45f9a298-5fb0-472f-b747-58a979ff2009\") " pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" Jan 24 00:32:53.873154 kubelet[2753]: I0124 00:32:53.871352 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkljm\" (UniqueName: \"kubernetes.io/projected/abb81e57-cdeb-458f-9a89-6ad70b4a9133-kube-api-access-hkljm\") pod \"calico-apiserver-6bfcb7c46c-w555v\" (UID: \"abb81e57-cdeb-458f-9a89-6ad70b4a9133\") " pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" Jan 24 00:32:53.873154 kubelet[2753]: I0124 00:32:53.871375 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4b6ade8-6f1f-4156-806d-99bf4d2944e2-config-volume\") pod \"coredns-668d6bf9bc-9hl7k\" (UID: \"d4b6ade8-6f1f-4156-806d-99bf4d2944e2\") " pod="kube-system/coredns-668d6bf9bc-9hl7k" Jan 24 00:32:53.873154 kubelet[2753]: I0124 00:32:53.871489 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b447x\" (UniqueName: \"kubernetes.io/projected/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-kube-api-access-b447x\") pod \"whisker-6974f9cb6-rk89t\" (UID: \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\") " pod="calico-system/whisker-6974f9cb6-rk89t" Jan 24 00:32:53.873154 kubelet[2753]: I0124 00:32:53.872505 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e954bcbc-6a7d-4fa9-9256-747a5b39530e-goldmane-ca-bundle\") pod \"goldmane-666569f655-r5n27\" (UID: \"e954bcbc-6a7d-4fa9-9256-747a5b39530e\") " 
pod="calico-system/goldmane-666569f655-r5n27" Jan 24 00:32:53.873154 kubelet[2753]: I0124 00:32:53.872555 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv79g\" (UniqueName: \"kubernetes.io/projected/60b48194-9cf1-4af7-bca5-7353b7dd4d41-kube-api-access-xv79g\") pod \"calico-kube-controllers-65d6744f47-ksmv4\" (UID: \"60b48194-9cf1-4af7-bca5-7353b7dd4d41\") " pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" Jan 24 00:32:53.873377 kubelet[2753]: I0124 00:32:53.872590 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e954bcbc-6a7d-4fa9-9256-747a5b39530e-config\") pod \"goldmane-666569f655-r5n27\" (UID: \"e954bcbc-6a7d-4fa9-9256-747a5b39530e\") " pod="calico-system/goldmane-666569f655-r5n27" Jan 24 00:32:53.873377 kubelet[2753]: I0124 00:32:53.872616 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abb81e57-cdeb-458f-9a89-6ad70b4a9133-calico-apiserver-certs\") pod \"calico-apiserver-6bfcb7c46c-w555v\" (UID: \"abb81e57-cdeb-458f-9a89-6ad70b4a9133\") " pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" Jan 24 00:32:53.873377 kubelet[2753]: I0124 00:32:53.872642 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e954bcbc-6a7d-4fa9-9256-747a5b39530e-goldmane-key-pair\") pod \"goldmane-666569f655-r5n27\" (UID: \"e954bcbc-6a7d-4fa9-9256-747a5b39530e\") " pod="calico-system/goldmane-666569f655-r5n27" Jan 24 00:32:53.873377 kubelet[2753]: I0124 00:32:53.872670 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7nx\" (UniqueName: \"kubernetes.io/projected/c67637af-e8b0-4286-97b2-b018e1728d18-kube-api-access-2f7nx\") pod \"coredns-668d6bf9bc-xrqs4\" (UID: \"c67637af-e8b0-4286-97b2-b018e1728d18\") " pod="kube-system/coredns-668d6bf9bc-xrqs4" Jan 24 00:32:53.895271 containerd[1650]: time="2026-01-24T00:32:53.895136342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:32:54.071147 containerd[1650]: time="2026-01-24T00:32:54.071051281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5n27,Uid:e954bcbc-6a7d-4fa9-9256-747a5b39530e,Namespace:calico-system,Attempt:0,}" Jan 24 00:32:54.074296 containerd[1650]: time="2026-01-24T00:32:54.074078931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-vcrjr,Uid:8571ab88-459c-48f7-a296-37c9ac9b6a8a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:32:54.079674 containerd[1650]: time="2026-01-24T00:32:54.079650924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfcb7c46c-w555v,Uid:abb81e57-cdeb-458f-9a89-6ad70b4a9133,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:32:54.080295 containerd[1650]: time="2026-01-24T00:32:54.080132543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrqs4,Uid:c67637af-e8b0-4286-97b2-b018e1728d18,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:54.080295 containerd[1650]: time="2026-01-24T00:32:54.080246742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d6744f47-ksmv4,Uid:60b48194-9cf1-4af7-bca5-7353b7dd4d41,Namespace:calico-system,Attempt:0,}" Jan 24 00:32:54.087618 
containerd[1650]: time="2026-01-24T00:32:54.087151202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-f9s4x,Uid:45f9a298-5fb0-472f-b747-58a979ff2009,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:32:54.175742 containerd[1650]: time="2026-01-24T00:32:54.173880481Z" level=error msg="Failed to destroy network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.175742 containerd[1650]: time="2026-01-24T00:32:54.174235391Z" level=error msg="encountered an error cleaning up failed sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.175742 containerd[1650]: time="2026-01-24T00:32:54.174280051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5n27,Uid:e954bcbc-6a7d-4fa9-9256-747a5b39530e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.176001 kubelet[2753]: E0124 00:32:54.174827 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.176001 kubelet[2753]: E0124 00:32:54.174920 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5n27" Jan 24 00:32:54.176001 kubelet[2753]: E0124 00:32:54.174948 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5n27" Jan 24 00:32:54.176132 kubelet[2753]: E0124 00:32:54.174994 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r5n27_calico-system(e954bcbc-6a7d-4fa9-9256-747a5b39530e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r5n27_calico-system(e954bcbc-6a7d-4fa9-9256-747a5b39530e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:32:54.239860 containerd[1650]: time="2026-01-24T00:32:54.239362606Z" level=error msg="Failed to destroy network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.240466 containerd[1650]: time="2026-01-24T00:32:54.240441052Z" level=error msg="encountered an error cleaning up failed sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.240718 containerd[1650]: time="2026-01-24T00:32:54.240655291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d6744f47-ksmv4,Uid:60b48194-9cf1-4af7-bca5-7353b7dd4d41,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.241088 kubelet[2753]: E0124 00:32:54.241005 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.241088 kubelet[2753]: E0124 00:32:54.241070 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" Jan 24 00:32:54.241088 kubelet[2753]: E0124 00:32:54.241086 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" Jan 24 00:32:54.241185 kubelet[2753]: E0124 00:32:54.241118 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65d6744f47-ksmv4_calico-system(60b48194-9cf1-4af7-bca5-7353b7dd4d41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-65d6744f47-ksmv4_calico-system(60b48194-9cf1-4af7-bca5-7353b7dd4d41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:32:54.261304 containerd[1650]: time="2026-01-24T00:32:54.261064141Z" level=error msg="Failed to destroy network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.261595 containerd[1650]: time="2026-01-24T00:32:54.261567959Z" level=error msg="encountered an error cleaning up failed sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.261679 containerd[1650]: time="2026-01-24T00:32:54.261664208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrqs4,Uid:c67637af-e8b0-4286-97b2-b018e1728d18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.261986 kubelet[2753]: E0124 00:32:54.261932 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.262029 kubelet[2753]: E0124 00:32:54.262000 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xrqs4" Jan 24 00:32:54.262029 kubelet[2753]: E0124 00:32:54.262019 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xrqs4" Jan 24 00:32:54.262085 kubelet[2753]: E0124 00:32:54.262055 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-xrqs4_kube-system(c67637af-e8b0-4286-97b2-b018e1728d18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xrqs4_kube-system(c67637af-e8b0-4286-97b2-b018e1728d18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xrqs4" podUID="c67637af-e8b0-4286-97b2-b018e1728d18" Jan 24 00:32:54.268277 containerd[1650]: time="2026-01-24T00:32:54.268192149Z" level=error msg="Failed to destroy network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.268646 containerd[1650]: time="2026-01-24T00:32:54.268547268Z" level=error msg="encountered an error cleaning up failed sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.268646 containerd[1650]: time="2026-01-24T00:32:54.268581198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-f9s4x,Uid:45f9a298-5fb0-472f-b747-58a979ff2009,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.268757 kubelet[2753]: E0124 00:32:54.268717 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.268792 kubelet[2753]: E0124 00:32:54.268755 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" Jan 24 00:32:54.268792 kubelet[2753]: E0124 00:32:54.268770 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" Jan 24 00:32:54.268957 kubelet[2753]: E0124 00:32:54.268798 2753 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4f66fd65-f9s4x_calico-apiserver(45f9a298-5fb0-472f-b747-58a979ff2009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4f66fd65-f9s4x_calico-apiserver(45f9a298-5fb0-472f-b747-58a979ff2009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:32:54.274101 containerd[1650]: time="2026-01-24T00:32:54.274067841Z" level=error msg="Failed to destroy network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.274622 containerd[1650]: time="2026-01-24T00:32:54.274599959Z" level=error msg="encountered an error cleaning up failed sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.274648 containerd[1650]: time="2026-01-24T00:32:54.274631979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-vcrjr,Uid:8571ab88-459c-48f7-a296-37c9ac9b6a8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.275421 kubelet[2753]: E0124 00:32:54.274820 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.275421 kubelet[2753]: E0124 00:32:54.274880 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" Jan 24 00:32:54.275421 kubelet[2753]: E0124 00:32:54.274893 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" Jan 24 00:32:54.275510 kubelet[2753]: E0124 00:32:54.274931 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4f66fd65-vcrjr_calico-apiserver(8571ab88-459c-48f7-a296-37c9ac9b6a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4f66fd65-vcrjr_calico-apiserver(8571ab88-459c-48f7-a296-37c9ac9b6a8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:32:54.276649 containerd[1650]: time="2026-01-24T00:32:54.276618513Z" level=error msg="Failed to destroy network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.276933 containerd[1650]: time="2026-01-24T00:32:54.276910422Z" level=error msg="encountered an error cleaning up failed sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.276969 containerd[1650]: time="2026-01-24T00:32:54.276945212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfcb7c46c-w555v,Uid:abb81e57-cdeb-458f-9a89-6ad70b4a9133,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.277317 kubelet[2753]: E0124 00:32:54.277201 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.277317 kubelet[2753]: E0124 00:32:54.277232 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" Jan 24 00:32:54.277317 kubelet[2753]: E0124 00:32:54.277246 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" Jan 24 00:32:54.277415 kubelet[2753]: E0124 00:32:54.277273 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bfcb7c46c-w555v_calico-apiserver(abb81e57-cdeb-458f-9a89-6ad70b4a9133)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bfcb7c46c-w555v_calico-apiserver(abb81e57-cdeb-458f-9a89-6ad70b4a9133)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:32:54.310177 containerd[1650]: time="2026-01-24T00:32:54.310103593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6974f9cb6-rk89t,Uid:dd50a6af-d58c-4f34-800a-f6cc47cfc02e,Namespace:calico-system,Attempt:0,}" Jan 24 00:32:54.354452 containerd[1650]: time="2026-01-24T00:32:54.353922901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hl7k,Uid:d4b6ade8-6f1f-4156-806d-99bf4d2944e2,Namespace:kube-system,Attempt:0,}" Jan 24 00:32:54.394410 containerd[1650]: time="2026-01-24T00:32:54.394330030Z" level=error msg="Failed to destroy network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.395649 containerd[1650]: time="2026-01-24T00:32:54.395617137Z" level=error msg="encountered an error cleaning up failed sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.395715 containerd[1650]: time="2026-01-24T00:32:54.395659737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6974f9cb6-rk89t,Uid:dd50a6af-d58c-4f34-800a-f6cc47cfc02e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.396866 kubelet[2753]: E0124 00:32:54.396821 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.396950 kubelet[2753]: E0124 00:32:54.396882 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6974f9cb6-rk89t" Jan 24 00:32:54.396950 kubelet[2753]: E0124 00:32:54.396899 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6974f9cb6-rk89t" Jan 24 00:32:54.396950 kubelet[2753]: E0124 00:32:54.396933 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6974f9cb6-rk89t_calico-system(dd50a6af-d58c-4f34-800a-f6cc47cfc02e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6974f9cb6-rk89t_calico-system(dd50a6af-d58c-4f34-800a-f6cc47cfc02e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6974f9cb6-rk89t" podUID="dd50a6af-d58c-4f34-800a-f6cc47cfc02e" Jan 24 00:32:54.430213 containerd[1650]: time="2026-01-24T00:32:54.429776514Z" level=error msg="Failed to destroy network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.430527 containerd[1650]: time="2026-01-24T00:32:54.430314292Z" level=error msg="encountered an error cleaning up failed sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.430527 containerd[1650]: time="2026-01-24T00:32:54.430384272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hl7k,Uid:d4b6ade8-6f1f-4156-806d-99bf4d2944e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.431922 kubelet[2753]: E0124 00:32:54.430779 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.431922 kubelet[2753]: E0124 00:32:54.430841 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9hl7k" Jan 24 00:32:54.431922 kubelet[2753]: E0124 00:32:54.430881 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9hl7k" Jan 24 00:32:54.432050 kubelet[2753]: E0124 00:32:54.430929 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9hl7k_kube-system(d4b6ade8-6f1f-4156-806d-99bf4d2944e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9hl7k_kube-system(d4b6ade8-6f1f-4156-806d-99bf4d2944e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9hl7k" podUID="d4b6ade8-6f1f-4156-806d-99bf4d2944e2" Jan 24 00:32:54.738479 containerd[1650]: time="2026-01-24T00:32:54.737781990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7gx,Uid:08d51dd3-a54b-4b8c-9510-41c1d4106f97,Namespace:calico-system,Attempt:0,}" Jan 24 00:32:54.803607 containerd[1650]: time="2026-01-24T00:32:54.803547763Z" level=error msg="Failed to destroy network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.804188 containerd[1650]: time="2026-01-24T00:32:54.804149881Z" level=error msg="encountered an error cleaning up failed sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.804243 containerd[1650]: time="2026-01-24T00:32:54.804212651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7gx,Uid:08d51dd3-a54b-4b8c-9510-41c1d4106f97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.804770 kubelet[2753]: E0124 00:32:54.804462 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.804770 kubelet[2753]: E0124 00:32:54.804522 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jv7gx" Jan 24 00:32:54.804770 kubelet[2753]: E0124 00:32:54.804540 2753 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jv7gx" Jan 24 00:32:54.804874 kubelet[2753]: E0124 00:32:54.804579 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:32:54.894774 kubelet[2753]: I0124 00:32:54.894749 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:32:54.897284 containerd[1650]: time="2026-01-24T00:32:54.895886456Z" level=info msg="StopPodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\"" Jan 24 00:32:54.897284 containerd[1650]: time="2026-01-24T00:32:54.896031596Z" level=info msg="Ensure that sandbox 7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d in task-service has been cleanup successfully" Jan 24 00:32:54.899078 kubelet[2753]: I0124 00:32:54.898733 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:32:54.899595 containerd[1650]: time="2026-01-24T00:32:54.899233067Z" level=info msg="StopPodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\"" Jan 24 00:32:54.899595 containerd[1650]: time="2026-01-24T00:32:54.899357596Z" level=info msg="Ensure that sandbox b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0 in task-service has been cleanup successfully" Jan 24 00:32:54.901493 kubelet[2753]: I0124 00:32:54.901468 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:32:54.902639 containerd[1650]: time="2026-01-24T00:32:54.902600666Z" level=info msg="StopPodSandbox for 
\"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\"" Jan 24 00:32:54.902837 containerd[1650]: time="2026-01-24T00:32:54.902806465Z" level=info msg="Ensure that sandbox 01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef in task-service has been cleanup successfully" Jan 24 00:32:54.909446 kubelet[2753]: I0124 00:32:54.909129 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:32:54.911112 containerd[1650]: time="2026-01-24T00:32:54.911090740Z" level=info msg="StopPodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\"" Jan 24 00:32:54.911311 containerd[1650]: time="2026-01-24T00:32:54.911296480Z" level=info msg="Ensure that sandbox 5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3 in task-service has been cleanup successfully" Jan 24 00:32:54.915290 kubelet[2753]: I0124 00:32:54.915264 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:32:54.918604 containerd[1650]: time="2026-01-24T00:32:54.918585889Z" level=info msg="StopPodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\"" Jan 24 00:32:54.919888 containerd[1650]: time="2026-01-24T00:32:54.919539655Z" level=info msg="Ensure that sandbox f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac in task-service has been cleanup successfully" Jan 24 00:32:54.919955 kubelet[2753]: I0124 00:32:54.919604 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:32:54.924220 containerd[1650]: time="2026-01-24T00:32:54.924172592Z" level=info msg="StopPodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\"" Jan 24 00:32:54.924655 containerd[1650]: time="2026-01-24T00:32:54.924622790Z" level=info msg="Ensure that sandbox d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106 in task-service has been cleanup successfully" Jan 24 00:32:54.928094 kubelet[2753]: I0124 00:32:54.928055 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:32:54.933019 containerd[1650]: time="2026-01-24T00:32:54.932761305Z" level=info msg="StopPodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\"" Jan 24 00:32:54.933019 containerd[1650]: time="2026-01-24T00:32:54.932873576Z" level=info msg="Ensure that sandbox 0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c in task-service has been cleanup successfully" Jan 24 00:32:54.940497 kubelet[2753]: I0124 00:32:54.940465 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:32:54.941801 kubelet[2753]: I0124 00:32:54.941791 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:32:54.942573 containerd[1650]: time="2026-01-24T00:32:54.942558477Z" level=info msg="StopPodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\"" Jan 24 00:32:54.942809 containerd[1650]: time="2026-01-24T00:32:54.942798366Z" level=info msg="StopPodSandbox for 
\"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\"" Jan 24 00:32:54.942998 containerd[1650]: time="2026-01-24T00:32:54.942987585Z" level=info msg="Ensure that sandbox 9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b in task-service has been cleanup successfully" Jan 24 00:32:54.943177 containerd[1650]: time="2026-01-24T00:32:54.943166655Z" level=info msg="Ensure that sandbox a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1 in task-service has been cleanup successfully" Jan 24 00:32:54.964250 containerd[1650]: time="2026-01-24T00:32:54.964194742Z" level=error msg="StopPodSandbox for \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" failed" error="failed to destroy network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:54.965060 kubelet[2753]: E0124 00:32:54.964750 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:32:54.965060 kubelet[2753]: E0124 00:32:54.964811 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef"} Jan 24 00:32:54.965060 kubelet[2753]: E0124 00:32:54.964879 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e954bcbc-6a7d-4fa9-9256-747a5b39530e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:54.965060 kubelet[2753]: E0124 00:32:54.964903 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e954bcbc-6a7d-4fa9-9256-747a5b39530e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:32:55.009679 containerd[1650]: time="2026-01-24T00:32:55.009577667Z" level=error msg="StopPodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" failed" error="failed to destroy network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.009981 kubelet[2753]: E0124 00:32:55.009769 2753 log.go:32] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:32:55.009981 kubelet[2753]: E0124 00:32:55.009839 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3"} Jan 24 00:32:55.009981 kubelet[2753]: E0124 00:32:55.009874 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.009981 kubelet[2753]: E0124 00:32:55.009891 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08d51dd3-a54b-4b8c-9510-41c1d4106f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:32:55.026165 containerd[1650]: time="2026-01-24T00:32:55.026134141Z" level=error msg="StopPodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" failed" error="failed to destroy network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.028301 kubelet[2753]: E0124 00:32:55.027374 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:32:55.028301 kubelet[2753]: E0124 00:32:55.027440 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d"} Jan 24 00:32:55.028301 kubelet[2753]: E0124 00:32:55.027473 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60b48194-9cf1-4af7-bca5-7353b7dd4d41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.028301 kubelet[2753]: E0124 00:32:55.027491 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60b48194-9cf1-4af7-bca5-7353b7dd4d41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:32:55.035598 containerd[1650]: time="2026-01-24T00:32:55.035554304Z" level=error msg="StopPodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" failed" error="failed to destroy network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.036470 containerd[1650]: time="2026-01-24T00:32:55.036449802Z" level=error msg="StopPodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" failed" error="failed to destroy network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.036647 kubelet[2753]: E0124 00:32:55.036629 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:32:55.036698 kubelet[2753]: E0124 00:32:55.036656 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0"} Jan 24 00:32:55.036698 kubelet[2753]: E0124 00:32:55.036684 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c67637af-e8b0-4286-97b2-b018e1728d18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.036756 kubelet[2753]: E0124 00:32:55.036699 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c67637af-e8b0-4286-97b2-b018e1728d18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xrqs4" podUID="c67637af-e8b0-4286-97b2-b018e1728d18" Jan 24 00:32:55.036756 kubelet[2753]: E0124 00:32:55.036598 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:32:55.036756 kubelet[2753]: E0124 00:32:55.036729 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac"} Jan 24 00:32:55.036756 kubelet[2753]: E0124 00:32:55.036743 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4b6ade8-6f1f-4156-806d-99bf4d2944e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.036889 kubelet[2753]: E0124 00:32:55.036757 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4b6ade8-6f1f-4156-806d-99bf4d2944e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9hl7k" podUID="d4b6ade8-6f1f-4156-806d-99bf4d2944e2" Jan 24 00:32:55.046169 containerd[1650]: time="2026-01-24T00:32:55.046144695Z" level=error msg="StopPodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" failed" error="failed to destroy network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.046375 kubelet[2753]: E0124 00:32:55.046349 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:32:55.046425 kubelet[2753]: E0124 00:32:55.046384 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c"} Jan 24 00:32:55.046451 kubelet[2753]: E0124 00:32:55.046423 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"45f9a298-5fb0-472f-b747-58a979ff2009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.046451 kubelet[2753]: E0124 00:32:55.046446 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45f9a298-5fb0-472f-b747-58a979ff2009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:32:55.047706 containerd[1650]: time="2026-01-24T00:32:55.047686451Z" level=error msg="StopPodSandbox for \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" failed" error="failed to destroy network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.047927 kubelet[2753]: E0124 00:32:55.047897 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:32:55.048000 kubelet[2753]: E0124 00:32:55.047988 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b"} Jan 24 00:32:55.048080 kubelet[2753]: E0124 00:32:55.048038 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8571ab88-459c-48f7-a296-37c9ac9b6a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.048080 kubelet[2753]: E0124 00:32:55.048059 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8571ab88-459c-48f7-a296-37c9ac9b6a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:32:55.051368 
containerd[1650]: time="2026-01-24T00:32:55.051349311Z" level=error msg="StopPodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" failed" error="failed to destroy network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.051558 kubelet[2753]: E0124 00:32:55.051534 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:32:55.051603 kubelet[2753]: E0124 00:32:55.051561 2753 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106"} Jan 24 00:32:55.051603 kubelet[2753]: E0124 00:32:55.051577 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.051603 kubelet[2753]: E0124 00:32:55.051597 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6974f9cb6-rk89t" podUID="dd50a6af-d58c-4f34-800a-f6cc47cfc02e" Jan 24 00:32:55.053372 containerd[1650]: time="2026-01-24T00:32:55.053352505Z" level=error msg="StopPodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" failed" error="failed to destroy network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:32:55.053587 kubelet[2753]: E0124 00:32:55.053557 2753 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:32:55.053629 kubelet[2753]: E0124 00:32:55.053587 2753 kuberuntime_manager.go:1546] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1"} Jan 24 00:32:55.053629 kubelet[2753]: E0124 00:32:55.053621 2753 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abb81e57-cdeb-458f-9a89-6ad70b4a9133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:32:55.053679 kubelet[2753]: E0124 00:32:55.053633 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abb81e57-cdeb-458f-9a89-6ad70b4a9133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:33:01.172114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184777932.mount: Deactivated successfully. Jan 24 00:33:01.201850 containerd[1650]: time="2026-01-24T00:33:01.201802326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:01.202696 containerd[1650]: time="2026-01-24T00:33:01.202663235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:33:01.203562 containerd[1650]: time="2026-01-24T00:33:01.203541433Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:01.205427 containerd[1650]: time="2026-01-24T00:33:01.205406410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:01.205878 containerd[1650]: time="2026-01-24T00:33:01.205850379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.310629668s" Jan 24 00:33:01.205920 containerd[1650]: time="2026-01-24T00:33:01.205882069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:33:01.234795 containerd[1650]: time="2026-01-24T00:33:01.234756658Z" level=info msg="CreateContainer within sandbox \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:33:01.249688 containerd[1650]: time="2026-01-24T00:33:01.249646882Z" level=info msg="CreateContainer within sandbox \"211dabbc1c0f92ced783941f1d1265aea74a2b8c9693d871bfd10fd6edf95cbb\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"72e124bfd081a37ef5d18427e9b5058337de1f5436a8138954b59547cb076868\"" Jan 24 00:33:01.252020 containerd[1650]: time="2026-01-24T00:33:01.251205679Z" level=info msg="StartContainer for \"72e124bfd081a37ef5d18427e9b5058337de1f5436a8138954b59547cb076868\"" Jan 24 00:33:01.314971 containerd[1650]: time="2026-01-24T00:33:01.314934898Z" level=info msg="StartContainer for \"72e124bfd081a37ef5d18427e9b5058337de1f5436a8138954b59547cb076868\" returns successfully" Jan 24 00:33:01.387803 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:33:01.388107 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 24 00:33:01.465493 containerd[1650]: time="2026-01-24T00:33:01.463603757Z" level=info msg="StopPodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\"" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.559 [INFO][4038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.560 [INFO][4038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" iface="eth0" netns="/var/run/netns/cni-b8c350ff-d6ab-9e0d-7d56-81a61c493f3f" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.561 [INFO][4038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" iface="eth0" netns="/var/run/netns/cni-b8c350ff-d6ab-9e0d-7d56-81a61c493f3f" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.562 [INFO][4038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" iface="eth0" netns="/var/run/netns/cni-b8c350ff-d6ab-9e0d-7d56-81a61c493f3f" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.562 [INFO][4038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.562 [INFO][4038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.589 [INFO][4052] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.589 [INFO][4052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.590 [INFO][4052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.595 [WARNING][4052] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.595 [INFO][4052] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.596 [INFO][4052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:01.602455 containerd[1650]: 2026-01-24 00:33:01.599 [INFO][4038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:01.602455 containerd[1650]: time="2026-01-24T00:33:01.601277984Z" level=info msg="TearDown network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" successfully" Jan 24 00:33:01.602455 containerd[1650]: time="2026-01-24T00:33:01.601299594Z" level=info msg="StopPodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" returns successfully" Jan 24 00:33:01.627994 kubelet[2753]: I0124 00:33:01.627956 2753 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-ca-bundle\") pod \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\" (UID: \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\") " Jan 24 00:33:01.627994 kubelet[2753]: I0124 00:33:01.627998 2753 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-backend-key-pair\") pod \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\" (UID: \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\") " Jan 24 00:33:01.627994 kubelet[2753]: I0124 00:33:01.628012 2753 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b447x\" (UniqueName: \"kubernetes.io/projected/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-kube-api-access-b447x\") pod \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\" (UID: \"dd50a6af-d58c-4f34-800a-f6cc47cfc02e\") " Jan 24 00:33:01.630528 kubelet[2753]: I0124 00:33:01.630497 2753 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dd50a6af-d58c-4f34-800a-f6cc47cfc02e" (UID: "dd50a6af-d58c-4f34-800a-f6cc47cfc02e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:33:01.632993 kubelet[2753]: I0124 00:33:01.632971 2753 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dd50a6af-d58c-4f34-800a-f6cc47cfc02e" (UID: "dd50a6af-d58c-4f34-800a-f6cc47cfc02e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:33:01.633355 kubelet[2753]: I0124 00:33:01.633314 2753 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-kube-api-access-b447x" (OuterVolumeSpecName: "kube-api-access-b447x") pod "dd50a6af-d58c-4f34-800a-f6cc47cfc02e" (UID: "dd50a6af-d58c-4f34-800a-f6cc47cfc02e"). InnerVolumeSpecName "kube-api-access-b447x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:33:01.728770 kubelet[2753]: I0124 00:33:01.728621 2753 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-ca-bundle\") on node \"ci-4081-3-6-n-a9e48d2ea0\" DevicePath \"\"" Jan 24 00:33:01.728770 kubelet[2753]: I0124 00:33:01.728656 2753 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-a9e48d2ea0\" DevicePath \"\"" Jan 24 00:33:01.728770 kubelet[2753]: I0124 00:33:01.728664 2753 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b447x\" (UniqueName: \"kubernetes.io/projected/dd50a6af-d58c-4f34-800a-f6cc47cfc02e-kube-api-access-b447x\") on node \"ci-4081-3-6-n-a9e48d2ea0\" DevicePath \"\"" Jan 24 00:33:02.020616 kubelet[2753]: I0124 00:33:02.020439 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kxn7g" podStartSLOduration=2.024594723 podStartE2EDuration="20.020421959s" podCreationTimestamp="2026-01-24 00:32:42 +0000 UTC" firstStartedPulling="2026-01-24 00:32:43.211228061 +0000 UTC m=+18.579807392" lastFinishedPulling="2026-01-24 00:33:01.207055307 +0000 UTC m=+36.575634628" observedRunningTime="2026-01-24 00:33:02.017262135 +0000 UTC m=+37.385841466" watchObservedRunningTime="2026-01-24 00:33:02.020421959 +0000 UTC m=+37.389001290" Jan 24 00:33:02.172415 systemd[1]: run-netns-cni\x2db8c350ff\x2dd6ab\x2d9e0d\x2d7d56\x2d81a61c493f3f.mount: Deactivated successfully. Jan 24 00:33:02.172553 systemd[1]: var-lib-kubelet-pods-dd50a6af\x2dd58c\x2d4f34\x2d800a\x2df6cc47cfc02e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db447x.mount: Deactivated successfully. Jan 24 00:33:02.172667 systemd[1]: var-lib-kubelet-pods-dd50a6af\x2dd58c\x2d4f34\x2d800a\x2df6cc47cfc02e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 24 00:33:02.232768 kubelet[2753]: I0124 00:33:02.232701 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdj4\" (UniqueName: \"kubernetes.io/projected/de21b4a8-c633-4342-967f-dd18be2c5322-kube-api-access-pjdj4\") pod \"whisker-58745d67dd-ct89f\" (UID: \"de21b4a8-c633-4342-967f-dd18be2c5322\") " pod="calico-system/whisker-58745d67dd-ct89f" Jan 24 00:33:02.232918 kubelet[2753]: I0124 00:33:02.232781 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de21b4a8-c633-4342-967f-dd18be2c5322-whisker-backend-key-pair\") pod \"whisker-58745d67dd-ct89f\" (UID: \"de21b4a8-c633-4342-967f-dd18be2c5322\") " pod="calico-system/whisker-58745d67dd-ct89f" Jan 24 00:33:02.232918 kubelet[2753]: I0124 00:33:02.232816 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de21b4a8-c633-4342-967f-dd18be2c5322-whisker-ca-bundle\") pod \"whisker-58745d67dd-ct89f\" (UID: \"de21b4a8-c633-4342-967f-dd18be2c5322\") " pod="calico-system/whisker-58745d67dd-ct89f" Jan 24 00:33:02.643795 containerd[1650]: time="2026-01-24T00:33:02.643731359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58745d67dd-ct89f,Uid:de21b4a8-c633-4342-967f-dd18be2c5322,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:02.744479 kubelet[2753]: I0124 00:33:02.743188 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd50a6af-d58c-4f34-800a-f6cc47cfc02e" path="/var/lib/kubelet/pods/dd50a6af-d58c-4f34-800a-f6cc47cfc02e/volumes" Jan 24 00:33:02.834918 systemd-networkd[1262]: cali4896d52b338: Link UP Jan 24 00:33:02.838480 systemd-networkd[1262]: cali4896d52b338: Gained carrier Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.696 [INFO][4096] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.718 [INFO][4096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0 whisker-58745d67dd- calico-system de21b4a8-c633-4342-967f-dd18be2c5322 929 0 2026-01-24 00:33:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58745d67dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 whisker-58745d67dd-ct89f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4896d52b338 [] [] }} ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.719 [INFO][4096] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.758 [INFO][4138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" 
HandleID="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.758 [INFO][4138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" HandleID="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"whisker-58745d67dd-ct89f", "timestamp":"2026-01-24 00:33:02.758544943 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.758 [INFO][4138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.758 [INFO][4138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.758 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.769 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.776 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.783 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.785 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.788 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.788 [INFO][4138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.791 [INFO][4138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.796 [INFO][4138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.806 [INFO][4138] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.1/26] block=192.168.43.0/26 handle="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.806 
[INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.1/26] handle="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.806 [INFO][4138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:02.873044 containerd[1650]: 2026-01-24 00:33:02.806 [INFO][4138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.1/26] IPv6=[] ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" HandleID="k8s-pod-network.afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.875235 containerd[1650]: 2026-01-24 00:33:02.809 [INFO][4096] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0", GenerateName:"whisker-58745d67dd-", Namespace:"calico-system", SelfLink:"", UID:"de21b4a8-c633-4342-967f-dd18be2c5322", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58745d67dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"whisker-58745d67dd-ct89f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4896d52b338", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:02.875235 containerd[1650]: 2026-01-24 00:33:02.809 [INFO][4096] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.1/32] ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.875235 containerd[1650]: 2026-01-24 00:33:02.809 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4896d52b338 ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.875235 containerd[1650]: 2026-01-24 00:33:02.845 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" 
WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.875235 containerd[1650]: 2026-01-24 00:33:02.845 [INFO][4096] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0", GenerateName:"whisker-58745d67dd-", Namespace:"calico-system", SelfLink:"", UID:"de21b4a8-c633-4342-967f-dd18be2c5322", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58745d67dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe", Pod:"whisker-58745d67dd-ct89f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4896d52b338", MAC:"62:83:83:66:4b:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:02.875235 containerd[1650]: 2026-01-24 00:33:02.854 [INFO][4096] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe" Namespace="calico-system" Pod="whisker-58745d67dd-ct89f" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--58745d67dd--ct89f-eth0" Jan 24 00:33:02.934380 containerd[1650]: time="2026-01-24T00:33:02.932623100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:02.935750 containerd[1650]: time="2026-01-24T00:33:02.934428317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:02.935750 containerd[1650]: time="2026-01-24T00:33:02.934448407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:02.935750 containerd[1650]: time="2026-01-24T00:33:02.934542847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:03.127295 containerd[1650]: time="2026-01-24T00:33:03.127265180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58745d67dd-ct89f,Uid:de21b4a8-c633-4342-967f-dd18be2c5322,Namespace:calico-system,Attempt:0,} returns sandbox id \"afd71079857133ebe88b9856204b3a3189834d7dd86bc7fde338a0f34e55fdbe\"" Jan 24 00:33:03.131247 containerd[1650]: time="2026-01-24T00:33:03.130634086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:33:03.199429 kernel: bpftool[4303]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:33:03.438980 systemd-networkd[1262]: vxlan.calico: Link UP Jan 24 00:33:03.439002 systemd-networkd[1262]: vxlan.calico: Gained carrier Jan 24 00:33:03.564613 containerd[1650]: time="2026-01-24T00:33:03.564431707Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:03.565856 containerd[1650]: time="2026-01-24T00:33:03.565681396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:33:03.565856 containerd[1650]: time="2026-01-24T00:33:03.565752815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:33:03.566023 kubelet[2753]: E0124 00:33:03.565971 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:03.566094 kubelet[2753]: E0124 00:33:03.566042 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:03.578815 kubelet[2753]: E0124 00:33:03.577697 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:212ab3e8d9b24b12bd1bf6f88681001f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:03.584161 containerd[1650]: time="2026-01-24T00:33:03.583861109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:33:04.005379 containerd[1650]: time="2026-01-24T00:33:04.005303709Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:04.007100 containerd[1650]: time="2026-01-24T00:33:04.006979027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:33:04.007279 containerd[1650]: time="2026-01-24T00:33:04.007114837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:04.007677 kubelet[2753]: E0124 00:33:04.007593 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:04.007677 kubelet[2753]: E0124 00:33:04.007669 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:04.008572 kubelet[2753]: E0124 00:33:04.007817 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:04.009624 kubelet[2753]: E0124 00:33:04.009488 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:33:04.596452 systemd-networkd[1262]: cali4896d52b338: Gained IPv6LL Jan 24 00:33:04.974184 kubelet[2753]: E0124 00:33:04.973798 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:33:05.299688 systemd-networkd[1262]: vxlan.calico: Gained IPv6LL Jan 24 00:33:05.732279 containerd[1650]: time="2026-01-24T00:33:05.731649357Z" level=info msg="StopPodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\"" Jan 24 00:33:05.732999 containerd[1650]: time="2026-01-24T00:33:05.732374796Z" level=info msg="StopPodSandbox for \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\"" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.837 [INFO][4404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.837 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" iface="eth0" netns="/var/run/netns/cni-0dc9b680-c3fd-25f4-ef1f-6cd96d639a38" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.839 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" iface="eth0" netns="/var/run/netns/cni-0dc9b680-c3fd-25f4-ef1f-6cd96d639a38" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.840 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" iface="eth0" netns="/var/run/netns/cni-0dc9b680-c3fd-25f4-ef1f-6cd96d639a38" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.840 [INFO][4404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.840 [INFO][4404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.890 [INFO][4422] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.891 [INFO][4422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.891 [INFO][4422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
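The two whisker containers never start: both pulls fail with NotFound because the ghcr.io/flatcar/calico tags are not resolvable from the registry, and the kubelet moves from ErrImagePull (a failed attempt) to ImagePullBackOff (the exponential back-off between retries) while the pod stays pending. A minimal Go sketch for reproducing the failed pull directly against containerd, assuming the default socket path and the CRI "k8s.io" namespace (both editor's assumptions, not shown in the log):

    // probe_image.go - editor's illustration, not part of the logged system:
    // replay the failed pull against the same containerd the kubelet uses.
    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images and containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // The reference the kubelet failed on; a NotFound here confirms the
        // tag is absent upstream rather than a node-local configuration issue.
        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4",
            containerd.WithPullUnpack)
        if err != nil {
            fmt.Println("pull failed:", err)
            return
        }
        fmt.Println("pull succeeded")
    }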
Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.899 [WARNING][4422] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.899 [INFO][4422] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.901 [INFO][4422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:05.908653 containerd[1650]: 2026-01-24 00:33:05.906 [INFO][4404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:05.910466 containerd[1650]: time="2026-01-24T00:33:05.909516752Z" level=info msg="TearDown network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" successfully" Jan 24 00:33:05.910466 containerd[1650]: time="2026-01-24T00:33:05.909618722Z" level=info msg="StopPodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" returns successfully" Jan 24 00:33:05.912176 systemd[1]: run-netns-cni\x2d0dc9b680\x2dc3fd\x2d25f4\x2def1f\x2d6cd96d639a38.mount: Deactivated successfully. Jan 24 00:33:05.913606 containerd[1650]: time="2026-01-24T00:33:05.913589118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hl7k,Uid:d4b6ade8-6f1f-4156-806d-99bf4d2944e2,Namespace:kube-system,Attempt:1,}" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.849 [INFO][4408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.849 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" iface="eth0" netns="/var/run/netns/cni-6e680e0b-ddee-c735-984d-27a32b8dd599" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.850 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" iface="eth0" netns="/var/run/netns/cni-6e680e0b-ddee-c735-984d-27a32b8dd599" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.854 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" iface="eth0" netns="/var/run/netns/cni-6e680e0b-ddee-c735-984d-27a32b8dd599" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.854 [INFO][4408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.854 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.911 [INFO][4427] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.914 [INFO][4427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.914 [INFO][4427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.928 [WARNING][4427] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.928 [INFO][4427] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.933 [INFO][4427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:05.950554 containerd[1650]: 2026-01-24 00:33:05.944 [INFO][4408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:05.950554 containerd[1650]: time="2026-01-24T00:33:05.949797362Z" level=info msg="TearDown network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" successfully" Jan 24 00:33:05.950554 containerd[1650]: time="2026-01-24T00:33:05.949826032Z" level=info msg="StopPodSandbox for \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" returns successfully" Jan 24 00:33:05.952189 systemd[1]: run-netns-cni\x2d6e680e0b\x2dddee\x2dc735\x2d984d\x2d27a32b8dd599.mount: Deactivated successfully. 
Jan 24 00:33:05.954680 containerd[1650]: time="2026-01-24T00:33:05.954650006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-vcrjr,Uid:8571ab88-459c-48f7-a296-37c9ac9b6a8a,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:33:06.058524 systemd-networkd[1262]: cali3f5ba50de34: Link UP Jan 24 00:33:06.060146 systemd-networkd[1262]: cali3f5ba50de34: Gained carrier Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:05.982 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0 coredns-668d6bf9bc- kube-system d4b6ade8-6f1f-4156-806d-99bf4d2944e2 959 0 2026-01-24 00:32:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 coredns-668d6bf9bc-9hl7k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f5ba50de34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:05.982 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.020 [INFO][4457] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" HandleID="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.021 [INFO][4457] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" HandleID="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"coredns-668d6bf9bc-9hl7k", "timestamp":"2026-01-24 00:33:06.020952514 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.021 [INFO][4457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.021 [INFO][4457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.021 [INFO][4457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.026 [INFO][4457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.031 [INFO][4457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.037 [INFO][4457] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.039 [INFO][4457] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.040 [INFO][4457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.040 [INFO][4457] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.041 [INFO][4457] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967 Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.045 [INFO][4457] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.050 [INFO][4457] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.2/26] block=192.168.43.0/26 handle="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.051 [INFO][4457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.2/26] handle="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.051 [INFO][4457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
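The IPAM walkthrough above is the same one the whisker sandbox went through: the node holds an affinity for block 192.168.43.0/26, that is 2^(32-26) = 64 addresses, and free addresses are claimed in order, which is why whisker got .1, this coredns pod gets .2, and the calico-apiserver pod below gets .3. The arithmetic, as a short Go sketch (editor's illustration):

    // Block arithmetic for the /26 affinity block seen in the entries above.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.43.0/26")
        fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

        // Walk the first few candidates the way a sequential allocator would.
        a := block.Addr().Next() // skip .0, the network address
        for i := 0; i < 3; i++ {
            fmt.Println("candidate:", a) // .1, .2, .3 - matching the log
            a = a.Next()
        }
    }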
Jan 24 00:33:06.073877 containerd[1650]: 2026-01-24 00:33:06.051 [INFO][4457] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.2/26] IPv6=[] ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" HandleID="k8s-pod-network.6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.074306 containerd[1650]: 2026-01-24 00:33:06.052 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b6ade8-6f1f-4156-806d-99bf4d2944e2", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"coredns-668d6bf9bc-9hl7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f5ba50de34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:06.074306 containerd[1650]: 2026-01-24 00:33:06.052 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.2/32] ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.074306 containerd[1650]: 2026-01-24 00:33:06.052 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f5ba50de34 ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.074306 containerd[1650]: 2026-01-24 00:33:06.061 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.074306 containerd[1650]: 2026-01-24 00:33:06.061 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b6ade8-6f1f-4156-806d-99bf4d2944e2", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967", Pod:"coredns-668d6bf9bc-9hl7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f5ba50de34", MAC:"8a:54:75:26:b3:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:06.074306 containerd[1650]: 2026-01-24 00:33:06.071 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967" Namespace="kube-system" Pod="coredns-668d6bf9bc-9hl7k" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:06.095201 containerd[1650]: time="2026-01-24T00:33:06.095011298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:06.095201 containerd[1650]: time="2026-01-24T00:33:06.095140808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:06.095201 containerd[1650]: time="2026-01-24T00:33:06.095150908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:06.095442 containerd[1650]: time="2026-01-24T00:33:06.095257958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:06.158044 containerd[1650]: time="2026-01-24T00:33:06.157972476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9hl7k,Uid:d4b6ade8-6f1f-4156-806d-99bf4d2944e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967\"" Jan 24 00:33:06.161923 containerd[1650]: time="2026-01-24T00:33:06.161550061Z" level=info msg="CreateContainer within sandbox \"6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:33:06.173057 systemd-networkd[1262]: cali57b1768dcce: Link UP Jan 24 00:33:06.174436 systemd-networkd[1262]: cali57b1768dcce: Gained carrier Jan 24 00:33:06.188552 containerd[1650]: time="2026-01-24T00:33:06.187982240Z" level=info msg="CreateContainer within sandbox \"6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0906f976acc62f58dcb2b6b06daab83f0495e6f02bf5cbf8f6ca8dfea67a1f5f\"" Jan 24 00:33:06.189543 containerd[1650]: time="2026-01-24T00:33:06.189163909Z" level=info msg="StartContainer for \"0906f976acc62f58dcb2b6b06daab83f0495e6f02bf5cbf8f6ca8dfea67a1f5f\"" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.013 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0 calico-apiserver-f4f66fd65- calico-apiserver 8571ab88-459c-48f7-a296-37c9ac9b6a8a 960 0 2026-01-24 00:32:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4f66fd65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 calico-apiserver-f4f66fd65-vcrjr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57b1768dcce [] [] }} ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.013 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.036 [INFO][4463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" HandleID="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.036 [INFO][4463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" HandleID="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"calico-apiserver-f4f66fd65-vcrjr", "timestamp":"2026-01-24 00:33:06.036129406 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.036 [INFO][4463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.051 [INFO][4463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.051 [INFO][4463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.128 [INFO][4463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.136 [INFO][4463] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.143 [INFO][4463] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.148 [INFO][4463] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.152 [INFO][4463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.152 [INFO][4463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.153 [INFO][4463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921 Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.157 [INFO][4463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.164 [INFO][4463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.3/26] block=192.168.43.0/26 handle="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.164 [INFO][4463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.3/26] handle="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.164 [INFO][4463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
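Note the timing in the apiserver request above: [4463] logs "About to acquire host-wide IPAM lock." at 00:33:06.036 but only proceeds at 00:33:06.051, the moment the coredns request [4457] releases the lock. Assignments from the shared block are serialized behind that one lock, which is what guarantees two concurrent sandboxes never claim the same address. A toy model of the serialization (editor's sketch; the pod names are from the log, everything else is illustrative):

    // Two concurrent CNI ADDs contending for one host-wide allocation lock.
    package main

    import (
        "fmt"
        "sync"
    )

    var (
        mu   sync.Mutex // stands in for the host-wide IPAM lock
        next = 2        // .1 already went to the whisker pod
    )

    func assign(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        mu.Lock()         // "About to acquire host-wide IPAM lock."
        defer mu.Unlock() // "Released host-wide IPAM lock."
        fmt.Printf("%s -> 192.168.43.%d/26\n", pod, next)
        next++
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go assign("coredns-668d6bf9bc-9hl7k", &wg)
        go assign("calico-apiserver-f4f66fd65-vcrjr", &wg)
        wg.Wait() // completion order may vary, but the addresses never collide
    }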
Jan 24 00:33:06.192473 containerd[1650]: 2026-01-24 00:33:06.164 [INFO][4463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.3/26] IPv6=[] ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" HandleID="k8s-pod-network.5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.193247 containerd[1650]: 2026-01-24 00:33:06.168 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"8571ab88-459c-48f7-a296-37c9ac9b6a8a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"calico-apiserver-f4f66fd65-vcrjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57b1768dcce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:06.193247 containerd[1650]: 2026-01-24 00:33:06.168 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.3/32] ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.193247 containerd[1650]: 2026-01-24 00:33:06.168 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57b1768dcce ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.193247 containerd[1650]: 2026-01-24 00:33:06.174 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.193247 containerd[1650]: 2026-01-24 00:33:06.175 [INFO][4444] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"8571ab88-459c-48f7-a296-37c9ac9b6a8a", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921", Pod:"calico-apiserver-f4f66fd65-vcrjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57b1768dcce", MAC:"22:af:12:12:d1:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:06.193247 containerd[1650]: 2026-01-24 00:33:06.187 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-vcrjr" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:06.218325 containerd[1650]: time="2026-01-24T00:33:06.217812776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:06.218325 containerd[1650]: time="2026-01-24T00:33:06.217887136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:06.218325 containerd[1650]: time="2026-01-24T00:33:06.217897766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:06.218325 containerd[1650]: time="2026-01-24T00:33:06.218038936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:06.265135 containerd[1650]: time="2026-01-24T00:33:06.265074921Z" level=info msg="StartContainer for \"0906f976acc62f58dcb2b6b06daab83f0495e6f02bf5cbf8f6ca8dfea67a1f5f\" returns successfully" Jan 24 00:33:06.299271 containerd[1650]: time="2026-01-24T00:33:06.298803322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-vcrjr,Uid:8571ab88-459c-48f7-a296-37c9ac9b6a8a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921\"" Jan 24 00:33:06.301419 containerd[1650]: time="2026-01-24T00:33:06.301241689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:06.733009 containerd[1650]: time="2026-01-24T00:33:06.732926519Z" level=info msg="StopPodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\"" Jan 24 00:33:06.735946 containerd[1650]: time="2026-01-24T00:33:06.735813116Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:06.738072 containerd[1650]: time="2026-01-24T00:33:06.737985753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:06.738171 containerd[1650]: time="2026-01-24T00:33:06.738100663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:06.738443 kubelet[2753]: E0124 00:33:06.738347 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:06.740102 kubelet[2753]: E0124 00:33:06.738468 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:06.740102 kubelet[2753]: E0124 00:33:06.738622 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8nk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-vcrjr_calico-apiserver(8571ab88-459c-48f7-a296-37c9ac9b6a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:06.740102 kubelet[2753]: E0124 00:33:06.739984 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.790 [INFO][4618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.790 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" iface="eth0" netns="/var/run/netns/cni-3c24e19b-ce7e-35ea-91eb-f3fee4b87af5" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.792 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" iface="eth0" netns="/var/run/netns/cni-3c24e19b-ce7e-35ea-91eb-f3fee4b87af5" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.793 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" iface="eth0" netns="/var/run/netns/cni-3c24e19b-ce7e-35ea-91eb-f3fee4b87af5" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.793 [INFO][4618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.793 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.844 [INFO][4625] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.845 [INFO][4625] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.845 [INFO][4625] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.854 [WARNING][4625] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.854 [INFO][4625] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.856 [INFO][4625] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:06.862330 containerd[1650]: 2026-01-24 00:33:06.859 [INFO][4618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:06.862942 containerd[1650]: time="2026-01-24T00:33:06.862365109Z" level=info msg="TearDown network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" successfully" Jan 24 00:33:06.862942 containerd[1650]: time="2026-01-24T00:33:06.862415899Z" level=info msg="StopPodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" returns successfully" Jan 24 00:33:06.863281 containerd[1650]: time="2026-01-24T00:33:06.863248518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7gx,Uid:08d51dd3-a54b-4b8c-9510-41c1d4106f97,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:06.924701 systemd[1]: run-netns-cni\x2d3c24e19b\x2dce7e\x2d35ea\x2d91eb\x2df3fee4b87af5.mount: Deactivated successfully. 
Jan 24 00:33:06.990240 systemd-networkd[1262]: cali41069dfb3fc: Link UP Jan 24 00:33:06.996887 systemd-networkd[1262]: cali41069dfb3fc: Gained carrier Jan 24 00:33:07.010553 kubelet[2753]: E0124 00:33:07.010451 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:07.023893 kubelet[2753]: I0124 00:33:07.023670 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9hl7k" podStartSLOduration=37.021278117 podStartE2EDuration="37.021278117s" podCreationTimestamp="2026-01-24 00:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:33:07.019696319 +0000 UTC m=+42.388275670" watchObservedRunningTime="2026-01-24 00:33:07.021278117 +0000 UTC m=+42.389857448" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.907 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0 csi-node-driver- calico-system 08d51dd3-a54b-4b8c-9510-41c1d4106f97 979 0 2026-01-24 00:32:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 csi-node-driver-jv7gx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali41069dfb3fc [] [] }} ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.907 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.947 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" HandleID="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.947 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" HandleID="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"csi-node-driver-jv7gx", "timestamp":"2026-01-24 00:33:06.947198891 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.947 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.947 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.947 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.955 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.959 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.962 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.964 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.966 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.966 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.967 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96 Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.972 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.979 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.4/26] block=192.168.43.0/26 handle="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.979 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.4/26] handle="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.979 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
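The IPAM exchange above works at /26 granularity: this host holds an affinity for the block 192.168.43.0/26, and every pod IP on the node is carved out of it. A /26 leaves six host bits, so one affine block covers 64 addresses, and the claimed 192.168.43.4 sits inside it. A quick standard-library check of that arithmetic (illustrative):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.43.0/26")
	claimed := netip.MustParseAddr("192.168.43.4")

	// 32 - 26 = 6 host bits, so one affine block holds 2^6 = 64 addresses.
	fmt.Println("block size:", 1<<(32-block.Bits())) // 64
	fmt.Println("contains .4:", block.Contains(claimed)) // true
}
```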
Jan 24 00:33:07.034568 containerd[1650]: 2026-01-24 00:33:06.979 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.4/26] IPv6=[] ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" HandleID="k8s-pod-network.350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.037705 containerd[1650]: 2026-01-24 00:33:06.981 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08d51dd3-a54b-4b8c-9510-41c1d4106f97", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"csi-node-driver-jv7gx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41069dfb3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:07.037705 containerd[1650]: 2026-01-24 00:33:06.982 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.4/32] ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.037705 containerd[1650]: 2026-01-24 00:33:06.982 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41069dfb3fc ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.037705 containerd[1650]: 2026-01-24 00:33:06.997 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.037705 containerd[1650]: 2026-01-24 00:33:06.999 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08d51dd3-a54b-4b8c-9510-41c1d4106f97", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96", Pod:"csi-node-driver-jv7gx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41069dfb3fc", MAC:"36:77:8a:97:12:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:07.037705 containerd[1650]: 2026-01-24 00:33:07.025 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96" Namespace="calico-system" Pod="csi-node-driver-jv7gx" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:07.087415 containerd[1650]: time="2026-01-24T00:33:07.086222328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:07.087415 containerd[1650]: time="2026-01-24T00:33:07.086322418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:07.087415 containerd[1650]: time="2026-01-24T00:33:07.086419488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:07.087415 containerd[1650]: time="2026-01-24T00:33:07.086642938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:07.116749 systemd[1]: run-containerd-runc-k8s.io-350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96-runc.hEre9s.mount: Deactivated successfully. 
Jan 24 00:33:07.133824 containerd[1650]: time="2026-01-24T00:33:07.133783478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv7gx,Uid:08d51dd3-a54b-4b8c-9510-41c1d4106f97,Namespace:calico-system,Attempt:1,} returns sandbox id \"350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96\"" Jan 24 00:33:07.136647 containerd[1650]: time="2026-01-24T00:33:07.136539265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:07.475994 systemd-networkd[1262]: cali3f5ba50de34: Gained IPv6LL Jan 24 00:33:07.574137 containerd[1650]: time="2026-01-24T00:33:07.574030731Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:07.603695 containerd[1650]: time="2026-01-24T00:33:07.603568890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:07.603695 containerd[1650]: time="2026-01-24T00:33:07.603605260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:07.604363 kubelet[2753]: E0124 00:33:07.603944 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:07.604363 kubelet[2753]: E0124 00:33:07.604027 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:07.604762 kubelet[2753]: E0124 00:33:07.604566 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:07.607720 containerd[1650]: time="2026-01-24T00:33:07.607660735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:07.732629 containerd[1650]: time="2026-01-24T00:33:07.732207383Z" level=info msg="StopPodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\"" Jan 24 00:33:07.733593 containerd[1650]: time="2026-01-24T00:33:07.732924143Z" level=info msg="StopPodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\"" Jan 24 00:33:07.737541 containerd[1650]: time="2026-01-24T00:33:07.737501629Z" level=info msg="StopPodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\"" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.831 [INFO][4727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.831 [INFO][4727] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" iface="eth0" netns="/var/run/netns/cni-bf95a9ac-bb7a-a43b-00a4-e49639c2941c" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.834 [INFO][4727] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" iface="eth0" netns="/var/run/netns/cni-bf95a9ac-bb7a-a43b-00a4-e49639c2941c" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.834 [INFO][4727] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" iface="eth0" netns="/var/run/netns/cni-bf95a9ac-bb7a-a43b-00a4-e49639c2941c" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.834 [INFO][4727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.834 [INFO][4727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.867 [INFO][4751] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.867 [INFO][4751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.867 [INFO][4751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.875 [WARNING][4751] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.875 [INFO][4751] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.876 [INFO][4751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:07.884515 containerd[1650]: 2026-01-24 00:33:07.878 [INFO][4727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:07.890771 containerd[1650]: time="2026-01-24T00:33:07.888478118Z" level=info msg="TearDown network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" successfully" Jan 24 00:33:07.890771 containerd[1650]: time="2026-01-24T00:33:07.888506688Z" level=info msg="StopPodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" returns successfully" Jan 24 00:33:07.888821 systemd[1]: run-netns-cni\x2dbf95a9ac\x2dbb7a\x2da43b\x2d00a4\x2de49639c2941c.mount: Deactivated successfully. 
Jan 24 00:33:07.891233 containerd[1650]: time="2026-01-24T00:33:07.891215116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d6744f47-ksmv4,Uid:60b48194-9cf1-4af7-bca5-7353b7dd4d41,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.844 [INFO][4729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.844 [INFO][4729] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" iface="eth0" netns="/var/run/netns/cni-d743c950-23bd-1788-87bc-082586882285" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.845 [INFO][4729] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" iface="eth0" netns="/var/run/netns/cni-d743c950-23bd-1788-87bc-082586882285" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.845 [INFO][4729] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" iface="eth0" netns="/var/run/netns/cni-d743c950-23bd-1788-87bc-082586882285" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.845 [INFO][4729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.845 [INFO][4729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.893 [INFO][4756] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.893 [INFO][4756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.893 [INFO][4756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.899 [WARNING][4756] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.899 [INFO][4756] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.901 [INFO][4756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:07.908732 containerd[1650]: 2026-01-24 00:33:07.904 [INFO][4729] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:07.911502 containerd[1650]: time="2026-01-24T00:33:07.911473484Z" level=info msg="TearDown network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" successfully" Jan 24 00:33:07.911622 containerd[1650]: time="2026-01-24T00:33:07.911611394Z" level=info msg="StopPodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" returns successfully" Jan 24 00:33:07.912968 containerd[1650]: time="2026-01-24T00:33:07.912932973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrqs4,Uid:c67637af-e8b0-4286-97b2-b018e1728d18,Namespace:kube-system,Attempt:1,}" Jan 24 00:33:07.919922 systemd[1]: run-netns-cni\x2dd743c950\x2d23bd\x2d1788\x2d87bc\x2d082586882285.mount: Deactivated successfully. Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.861 [INFO][4733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.861 [INFO][4733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" iface="eth0" netns="/var/run/netns/cni-cc3ce523-e6b4-7621-f46c-377a1f060c67" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.861 [INFO][4733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" iface="eth0" netns="/var/run/netns/cni-cc3ce523-e6b4-7621-f46c-377a1f060c67" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.862 [INFO][4733] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" iface="eth0" netns="/var/run/netns/cni-cc3ce523-e6b4-7621-f46c-377a1f060c67" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.862 [INFO][4733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.862 [INFO][4733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.905 [INFO][4762] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.908 [INFO][4762] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.908 [INFO][4762] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.920 [WARNING][4762] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.920 [INFO][4762] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.922 [INFO][4762] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:07.926955 containerd[1650]: 2026-01-24 00:33:07.924 [INFO][4733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:07.928843 containerd[1650]: time="2026-01-24T00:33:07.927121327Z" level=info msg="TearDown network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" successfully" Jan 24 00:33:07.928843 containerd[1650]: time="2026-01-24T00:33:07.927171188Z" level=info msg="StopPodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" returns successfully" Jan 24 00:33:07.932356 containerd[1650]: time="2026-01-24T00:33:07.931290783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-f9s4x,Uid:45f9a298-5fb0-472f-b747-58a979ff2009,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:33:07.931915 systemd[1]: run-netns-cni\x2dcc3ce523\x2de6b4\x2d7621\x2df46c\x2d377a1f060c67.mount: Deactivated successfully. 
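The run-netns-cni\x2d... mount units in the systemd lines above are not corruption: systemd derives mount unit names from paths by turning "/" into "-" and escaping a literal "-" (and other characters outside its allowed set) as \xXX, so each unit decodes back to a netns path under /run/netns, which is where /var/run/netns points. A small illustrative decoder; systemd-escape --unescape --path is the canonical tool:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath reverses systemd's unit-name escaping for mount units:
// "-" separates path components (it stands for "/"), and \xXX encodes
// a literal byte, e.g. \x2d for "-". Illustrative only.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	// The netns mount torn down in the entry above.
	fmt.Println(unitToPath(`run-netns-cni\x2dcc3ce523\x2de6b4\x2d7621\x2df46c\x2d377a1f060c67.mount`))
	// -> /run/netns/cni-cc3ce523-e6b4-7621-f46c-377a1f060c67
}
```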
Jan 24 00:33:08.011781 kubelet[2753]: E0124 00:33:08.011475 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:08.064245 systemd-networkd[1262]: calie17e4dc2dea: Link UP Jan 24 00:33:08.065485 systemd-networkd[1262]: calie17e4dc2dea: Gained carrier Jan 24 00:33:08.073077 containerd[1650]: time="2026-01-24T00:33:08.072915529Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:08.076827 containerd[1650]: time="2026-01-24T00:33:08.076638186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:08.076827 containerd[1650]: time="2026-01-24T00:33:08.076664786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:08.077201 kubelet[2753]: E0124 00:33:08.077059 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:08.079462 kubelet[2753]: E0124 00:33:08.079426 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:08.079858 kubelet[2753]: E0124 00:33:08.079659 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:08.081497 kubelet[2753]: E0124 00:33:08.081269 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:07.969 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0 calico-kube-controllers-65d6744f47- calico-system 60b48194-9cf1-4af7-bca5-7353b7dd4d41 1003 0 2026-01-24 00:32:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65d6744f47 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 calico-kube-controllers-65d6744f47-ksmv4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie17e4dc2dea [] [] }} ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:07.970 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.004 [INFO][4807] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" HandleID="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.007 [INFO][4807] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" HandleID="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"calico-kube-controllers-65d6744f47-ksmv4", "timestamp":"2026-01-24 00:33:08.004851445 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.007 [INFO][4807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.007 [INFO][4807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.008 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.016 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.028 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.033 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.036 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.042 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.043 [INFO][4807] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.045 [INFO][4807] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.050 [INFO][4807] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.056 [INFO][4807] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.5/26] block=192.168.43.0/26 handle="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.056 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.5/26] handle="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.056 [INFO][4807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
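Each allocation in this log follows the same locked sequence: acquire the host-wide IPAM lock, confirm the affinity for 192.168.43.0/26, claim the next free address (.5 here for calico-kube-controllers, after .4 above), write the block back, release the lock. Serializing every CNI ADD on one host-wide lock keeps concurrent sandbox setups from handing out the same IP from the shared block. A conceptual sketch of allocate-under-lock, using an in-process mutex purely for illustration; Calico actually persists blocks in its datastore and relies on versioned writes:

```go
package main

import (
	"fmt"
	"sync"
)

// blockAllocator hands out offsets from one affine /26 block while a
// single host-wide lock serializes concurrent CNI ADDs. Conceptual
// sketch only, not Calico's datastore-backed implementation.
type blockAllocator struct {
	mu   sync.Mutex
	used [64]bool // a /26 block has 2^6 = 64 slots
}

func (a *blockAllocator) claim() (int, bool) {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	for i, taken := range a.used {
		if !taken {
			a.used[i] = true
			return i, true
		}
	}
	return 0, false // block exhausted; real IPAM would try another block
}

func main() {
	var alloc blockAllocator
	for i := 0; i < 4; i++ {
		alloc.used[i] = true // .0 through .3 were claimed earlier
	}
	if off, ok := alloc.claim(); ok {
		fmt.Printf("claimed 192.168.43.%d/26\n", off) // .4, then .5, .6, ...
	}
}
```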
Jan 24 00:33:08.082251 containerd[1650]: 2026-01-24 00:33:08.056 [INFO][4807] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.5/26] IPv6=[] ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" HandleID="k8s-pod-network.9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.083170 containerd[1650]: 2026-01-24 00:33:08.061 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0", GenerateName:"calico-kube-controllers-65d6744f47-", Namespace:"calico-system", SelfLink:"", UID:"60b48194-9cf1-4af7-bca5-7353b7dd4d41", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d6744f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"calico-kube-controllers-65d6744f47-ksmv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie17e4dc2dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:08.083170 containerd[1650]: 2026-01-24 00:33:08.061 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.5/32] ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.083170 containerd[1650]: 2026-01-24 00:33:08.061 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie17e4dc2dea ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.083170 containerd[1650]: 2026-01-24 00:33:08.064 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" 
WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.083170 containerd[1650]: 2026-01-24 00:33:08.065 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0", GenerateName:"calico-kube-controllers-65d6744f47-", Namespace:"calico-system", SelfLink:"", UID:"60b48194-9cf1-4af7-bca5-7353b7dd4d41", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d6744f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c", Pod:"calico-kube-controllers-65d6744f47-ksmv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie17e4dc2dea", MAC:"ae:2f:6b:b4:56:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:08.083170 containerd[1650]: 2026-01-24 00:33:08.077 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c" Namespace="calico-system" Pod="calico-kube-controllers-65d6744f47-ksmv4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:08.105936 containerd[1650]: time="2026-01-24T00:33:08.105790338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:08.105936 containerd[1650]: time="2026-01-24T00:33:08.105849658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:08.105936 containerd[1650]: time="2026-01-24T00:33:08.105857638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:08.106145 containerd[1650]: time="2026-01-24T00:33:08.106092317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:08.115603 systemd-networkd[1262]: cali57b1768dcce: Gained IPv6LL Jan 24 00:33:08.172319 systemd-networkd[1262]: cali460353233fb: Link UP Jan 24 00:33:08.174669 systemd-networkd[1262]: cali460353233fb: Gained carrier Jan 24 00:33:08.186724 containerd[1650]: time="2026-01-24T00:33:08.186632110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d6744f47-ksmv4,Uid:60b48194-9cf1-4af7-bca5-7353b7dd4d41,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c\"" Jan 24 00:33:08.195573 containerd[1650]: time="2026-01-24T00:33:08.195282181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:07.991 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0 coredns-668d6bf9bc- kube-system c67637af-e8b0-4286-97b2-b018e1728d18 1004 0 2026-01-24 00:32:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 coredns-668d6bf9bc-xrqs4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali460353233fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:07.992 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.047 [INFO][4815] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" HandleID="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.047 [INFO][4815] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" HandleID="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"coredns-668d6bf9bc-xrqs4", "timestamp":"2026-01-24 00:33:08.047335795 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.047 [INFO][4815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.056 [INFO][4815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.057 [INFO][4815] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.121 [INFO][4815] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.134 [INFO][4815] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.140 [INFO][4815] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.141 [INFO][4815] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.143 [INFO][4815] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.143 [INFO][4815] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.144 [INFO][4815] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258 Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.149 [INFO][4815] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.155 [INFO][4815] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.6/26] block=192.168.43.0/26 handle="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.155 [INFO][4815] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.6/26] handle="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.155 [INFO][4815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
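The coredns workload endpoint carries named ports, listed above by the CNI plugin as {dns UDP 53 0}, {dns-tcp TCP 53 0}, and {metrics TCP 9153 0}; in the v3.WorkloadEndpoint struct dumps that follow, the same values are rendered in Go hex notation as Port:0x35 and Port:0x23c1. The conversion is direct:

```go
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53   -> the dns and dns-tcp ports
	fmt.Println(0x23c1) // 9153 -> the coredns Prometheus metrics port
}
```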
Jan 24 00:33:08.196072 containerd[1650]: 2026-01-24 00:33:08.155 [INFO][4815] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.6/26] IPv6=[] ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" HandleID="k8s-pod-network.7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.197898 containerd[1650]: 2026-01-24 00:33:08.163 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c67637af-e8b0-4286-97b2-b018e1728d18", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"coredns-668d6bf9bc-xrqs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali460353233fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:08.197898 containerd[1650]: 2026-01-24 00:33:08.163 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.6/32] ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.197898 containerd[1650]: 2026-01-24 00:33:08.164 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali460353233fb ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.197898 containerd[1650]: 2026-01-24 00:33:08.176 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.197898 containerd[1650]: 2026-01-24 00:33:08.177 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c67637af-e8b0-4286-97b2-b018e1728d18", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258", Pod:"coredns-668d6bf9bc-xrqs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali460353233fb", MAC:"12:ac:42:10:e9:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:08.197898 containerd[1650]: 2026-01-24 00:33:08.193 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258" Namespace="kube-system" Pod="coredns-668d6bf9bc-xrqs4" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:08.217488 containerd[1650]: time="2026-01-24T00:33:08.216089211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:08.217488 containerd[1650]: time="2026-01-24T00:33:08.217003780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:08.217488 containerd[1650]: time="2026-01-24T00:33:08.217023190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:08.217488 containerd[1650]: time="2026-01-24T00:33:08.217131890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:08.264437 systemd-networkd[1262]: calica5eaf44f6a: Link UP Jan 24 00:33:08.266554 systemd-networkd[1262]: calica5eaf44f6a: Gained carrier Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.005 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0 calico-apiserver-f4f66fd65- calico-apiserver 45f9a298-5fb0-472f-b747-58a979ff2009 1005 0 2026-01-24 00:32:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4f66fd65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 calico-apiserver-f4f66fd65-f9s4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica5eaf44f6a [] [] }} ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.005 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.057 [INFO][4819] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" HandleID="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.057 [INFO][4819] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" HandleID="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"calico-apiserver-f4f66fd65-f9s4x", "timestamp":"2026-01-24 00:33:08.057215765 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.057 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.156 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.156 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.217 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.234 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.239 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.241 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.243 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.243 [INFO][4819] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.245 [INFO][4819] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5 Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.249 [INFO][4819] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.256 [INFO][4819] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.7/26] block=192.168.43.0/26 handle="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.256 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.7/26] handle="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.256 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:08.284993 containerd[1650]: 2026-01-24 00:33:08.256 [INFO][4819] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.7/26] IPv6=[] ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" HandleID="k8s-pod-network.432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.285457 containerd[1650]: 2026-01-24 00:33:08.258 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"45f9a298-5fb0-472f-b747-58a979ff2009", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"calico-apiserver-f4f66fd65-f9s4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica5eaf44f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:08.285457 containerd[1650]: 2026-01-24 00:33:08.258 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.7/32] ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.285457 containerd[1650]: 2026-01-24 00:33:08.258 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica5eaf44f6a ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.285457 containerd[1650]: 2026-01-24 00:33:08.261 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.285457 containerd[1650]: 2026-01-24 00:33:08.261 [INFO][4793] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"45f9a298-5fb0-472f-b747-58a979ff2009", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5", Pod:"calico-apiserver-f4f66fd65-f9s4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica5eaf44f6a", MAC:"e6:37:7b:e1:26:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:08.285457 containerd[1650]: 2026-01-24 00:33:08.277 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5" Namespace="calico-apiserver" Pod="calico-apiserver-f4f66fd65-f9s4x" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:08.302442 containerd[1650]: time="2026-01-24T00:33:08.302369518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrqs4,Uid:c67637af-e8b0-4286-97b2-b018e1728d18,Namespace:kube-system,Attempt:1,} returns sandbox id \"7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258\"" Jan 24 00:33:08.306257 containerd[1650]: time="2026-01-24T00:33:08.306117354Z" level=info msg="CreateContainer within sandbox \"7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:33:08.324515 containerd[1650]: time="2026-01-24T00:33:08.324219676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:08.324515 containerd[1650]: time="2026-01-24T00:33:08.324271046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:08.324515 containerd[1650]: time="2026-01-24T00:33:08.324280746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:08.324515 containerd[1650]: time="2026-01-24T00:33:08.324382096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:08.337426 containerd[1650]: time="2026-01-24T00:33:08.336292465Z" level=info msg="CreateContainer within sandbox \"7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69c9a19dd38c4baf9c37f8d81695cb01a1a34fa91549b113e5267b4966b37af2\"" Jan 24 00:33:08.338035 containerd[1650]: time="2026-01-24T00:33:08.337990833Z" level=info msg="StartContainer for \"69c9a19dd38c4baf9c37f8d81695cb01a1a34fa91549b113e5267b4966b37af2\"" Jan 24 00:33:08.401105 containerd[1650]: time="2026-01-24T00:33:08.400983282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f66fd65-f9s4x,Uid:45f9a298-5fb0-472f-b747-58a979ff2009,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5\"" Jan 24 00:33:08.413003 containerd[1650]: time="2026-01-24T00:33:08.412544291Z" level=info msg="StartContainer for \"69c9a19dd38c4baf9c37f8d81695cb01a1a34fa91549b113e5267b4966b37af2\" returns successfully" Jan 24 00:33:08.637189 containerd[1650]: time="2026-01-24T00:33:08.637077843Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:08.639012 containerd[1650]: time="2026-01-24T00:33:08.638849022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:33:08.639012 containerd[1650]: time="2026-01-24T00:33:08.638921092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:08.639265 kubelet[2753]: E0124 00:33:08.639212 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:08.639370 kubelet[2753]: E0124 00:33:08.639286 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:08.640170 kubelet[2753]: E0124 00:33:08.639684 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv79g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d6744f47-ksmv4_calico-system(60b48194-9cf1-4af7-bca5-7353b7dd4d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:08.640761 containerd[1650]: time="2026-01-24T00:33:08.639782031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:08.641576 kubelet[2753]: E0124 00:33:08.641448 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" 
podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:33:08.737577 containerd[1650]: time="2026-01-24T00:33:08.737067127Z" level=info msg="StopPodSandbox for \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\"" Jan 24 00:33:08.740753 containerd[1650]: time="2026-01-24T00:33:08.740603843Z" level=info msg="StopPodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\"" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.850 [INFO][5042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.850 [INFO][5042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" iface="eth0" netns="/var/run/netns/cni-4fcf0133-3252-ab38-c808-2b04d361cef4" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.850 [INFO][5042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" iface="eth0" netns="/var/run/netns/cni-4fcf0133-3252-ab38-c808-2b04d361cef4" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.857 [INFO][5042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" iface="eth0" netns="/var/run/netns/cni-4fcf0133-3252-ab38-c808-2b04d361cef4" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.857 [INFO][5042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.857 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.884 [INFO][5059] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.884 [INFO][5059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.884 [INFO][5059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.890 [WARNING][5059] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.890 [INFO][5059] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.892 [INFO][5059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:08.897014 containerd[1650]: 2026-01-24 00:33:08.895 [INFO][5042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:08.898207 containerd[1650]: time="2026-01-24T00:33:08.897664622Z" level=info msg="TearDown network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" successfully" Jan 24 00:33:08.898207 containerd[1650]: time="2026-01-24T00:33:08.897698822Z" level=info msg="StopPodSandbox for \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" returns successfully" Jan 24 00:33:08.898744 containerd[1650]: time="2026-01-24T00:33:08.898674380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5n27,Uid:e954bcbc-6a7d-4fa9-9256-747a5b39530e,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:08.921634 systemd[1]: run-netns-cni\x2d4fcf0133\x2d3252\x2dab38\x2dc808\x2d2b04d361cef4.mount: Deactivated successfully. Jan 24 00:33:08.948438 systemd-networkd[1262]: cali41069dfb3fc: Gained IPv6LL Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.881 [INFO][5048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.881 [INFO][5048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" iface="eth0" netns="/var/run/netns/cni-1c32654d-598e-a5f0-67f9-d7c6af30ba8f" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.882 [INFO][5048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" iface="eth0" netns="/var/run/netns/cni-1c32654d-598e-a5f0-67f9-d7c6af30ba8f" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.883 [INFO][5048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" iface="eth0" netns="/var/run/netns/cni-1c32654d-598e-a5f0-67f9-d7c6af30ba8f" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.883 [INFO][5048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.883 [INFO][5048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.938 [INFO][5066] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.938 [INFO][5066] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.938 [INFO][5066] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.943 [WARNING][5066] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.943 [INFO][5066] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.945 [INFO][5066] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:08.949914 containerd[1650]: 2026-01-24 00:33:08.946 [INFO][5048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:08.950199 containerd[1650]: time="2026-01-24T00:33:08.949954510Z" level=info msg="TearDown network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" successfully" Jan 24 00:33:08.950199 containerd[1650]: time="2026-01-24T00:33:08.949978040Z" level=info msg="StopPodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" returns successfully" Jan 24 00:33:08.952688 containerd[1650]: time="2026-01-24T00:33:08.950660730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfcb7c46c-w555v,Uid:abb81e57-cdeb-458f-9a89-6ad70b4a9133,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:33:08.955554 systemd[1]: run-netns-cni\x2d1c32654d\x2d598e\x2da5f0\x2d67f9\x2dd7c6af30ba8f.mount: Deactivated successfully. 
Jan 24 00:33:09.044405 kubelet[2753]: E0124 00:33:09.044199 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:33:09.048773 kubelet[2753]: E0124 00:33:09.048700 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:33:09.052863 kubelet[2753]: I0124 00:33:09.050792 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xrqs4" podStartSLOduration=39.050779067 podStartE2EDuration="39.050779067s" podCreationTimestamp="2026-01-24 00:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:33:09.032230734 +0000 UTC m=+44.400810055" watchObservedRunningTime="2026-01-24 00:33:09.050779067 +0000 UTC m=+44.419358388" Jan 24 00:33:09.093127 systemd-networkd[1262]: cali6dca223a2a1: Link UP Jan 24 00:33:09.095373 systemd-networkd[1262]: cali6dca223a2a1: Gained carrier Jan 24 00:33:09.101776 containerd[1650]: time="2026-01-24T00:33:09.100620973Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:09.102939 containerd[1650]: time="2026-01-24T00:33:09.102612342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:09.102999 kubelet[2753]: E0124 00:33:09.102965 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:09.102999 kubelet[2753]: E0124 00:33:09.102995 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:09.103376 kubelet[2753]: E0124 00:33:09.103083 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xwzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-f9s4x_calico-apiserver(45f9a298-5fb0-472f-b747-58a979ff2009): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:09.103684 containerd[1650]: time="2026-01-24T00:33:09.102701592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:09.105063 kubelet[2753]: E0124 00:33:09.105017 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 
00:33:09.106928 containerd[1650]: 2026-01-24 00:33:08.971 [INFO][5073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0 goldmane-666569f655- calico-system e954bcbc-6a7d-4fa9-9256-747a5b39530e 1032 0 2026-01-24 00:32:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 goldmane-666569f655-r5n27 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6dca223a2a1 [] [] }} ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:08.971 [INFO][5073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.004 [INFO][5095] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" HandleID="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.005 [INFO][5095] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" HandleID="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"goldmane-666569f655-r5n27", "timestamp":"2026-01-24 00:33:09.004945708 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.005 [INFO][5095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.005 [INFO][5095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.005 [INFO][5095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.010 [INFO][5095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.019 [INFO][5095] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.038 [INFO][5095] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.043 [INFO][5095] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.056 [INFO][5095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.057 [INFO][5095] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.065 [INFO][5095] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928 Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.072 [INFO][5095] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.081 [INFO][5095] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.8/26] block=192.168.43.0/26 handle="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.081 [INFO][5095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.8/26] handle="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.081 [INFO][5095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:09.106928 containerd[1650]: 2026-01-24 00:33:09.081 [INFO][5095] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.8/26] IPv6=[] ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" HandleID="k8s-pod-network.b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.107310 containerd[1650]: 2026-01-24 00:33:09.087 [INFO][5073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e954bcbc-6a7d-4fa9-9256-747a5b39530e", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"goldmane-666569f655-r5n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6dca223a2a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:09.107310 containerd[1650]: 2026-01-24 00:33:09.088 [INFO][5073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.8/32] ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.107310 containerd[1650]: 2026-01-24 00:33:09.088 [INFO][5073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dca223a2a1 ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.107310 containerd[1650]: 2026-01-24 00:33:09.090 [INFO][5073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.107310 containerd[1650]: 2026-01-24 00:33:09.091 [INFO][5073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" 
Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e954bcbc-6a7d-4fa9-9256-747a5b39530e", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928", Pod:"goldmane-666569f655-r5n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6dca223a2a1", MAC:"7a:ce:62:57:8e:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:09.107310 containerd[1650]: 2026-01-24 00:33:09.103 [INFO][5073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928" Namespace="calico-system" Pod="goldmane-666569f655-r5n27" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:09.144282 containerd[1650]: time="2026-01-24T00:33:09.144141725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:09.144439 containerd[1650]: time="2026-01-24T00:33:09.144266495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:09.146056 containerd[1650]: time="2026-01-24T00:33:09.145331564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:09.146056 containerd[1650]: time="2026-01-24T00:33:09.145972874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:09.160695 systemd-networkd[1262]: calif4420e3ad5d: Link UP Jan 24 00:33:09.164150 systemd-networkd[1262]: calif4420e3ad5d: Gained carrier Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.006 [INFO][5086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0 calico-apiserver-6bfcb7c46c- calico-apiserver abb81e57-cdeb-458f-9a89-6ad70b4a9133 1033 0 2026-01-24 00:32:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bfcb7c46c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-a9e48d2ea0 calico-apiserver-6bfcb7c46c-w555v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif4420e3ad5d [] [] }} ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.007 [INFO][5086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.064 [INFO][5106] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" HandleID="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.065 [INFO][5106] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" HandleID="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-a9e48d2ea0", "pod":"calico-apiserver-6bfcb7c46c-w555v", "timestamp":"2026-01-24 00:33:09.064945235 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a9e48d2ea0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.065 [INFO][5106] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.081 [INFO][5106] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.081 [INFO][5106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a9e48d2ea0' Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.112 [INFO][5106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.120 [INFO][5106] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.129 [INFO][5106] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.133 [INFO][5106] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.137 [INFO][5106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.137 [INFO][5106] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.138 [INFO][5106] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8 Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.143 [INFO][5106] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.149 [INFO][5106] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.9/26] block=192.168.43.0/26 handle="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.149 [INFO][5106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.9/26] handle="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" host="ci-4081-3-6-n-a9e48d2ea0" Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.149 [INFO][5106] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:09.182590 containerd[1650]: 2026-01-24 00:33:09.149 [INFO][5106] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.9/26] IPv6=[] ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" HandleID="k8s-pod-network.80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.184211 containerd[1650]: 2026-01-24 00:33:09.153 [INFO][5086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0", GenerateName:"calico-apiserver-6bfcb7c46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb81e57-cdeb-458f-9a89-6ad70b4a9133", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bfcb7c46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"", Pod:"calico-apiserver-6bfcb7c46c-w555v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4420e3ad5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:09.184211 containerd[1650]: 2026-01-24 00:33:09.153 [INFO][5086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.9/32] ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.184211 containerd[1650]: 2026-01-24 00:33:09.153 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4420e3ad5d ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.184211 containerd[1650]: 2026-01-24 00:33:09.166 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.184211 containerd[1650]: 2026-01-24 00:33:09.167 
[INFO][5086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0", GenerateName:"calico-apiserver-6bfcb7c46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb81e57-cdeb-458f-9a89-6ad70b4a9133", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bfcb7c46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8", Pod:"calico-apiserver-6bfcb7c46c-w555v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4420e3ad5d", MAC:"fa:79:18:82:62:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:09.184211 containerd[1650]: 2026-01-24 00:33:09.177 [INFO][5086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8" Namespace="calico-apiserver" Pod="calico-apiserver-6bfcb7c46c-w555v" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:09.222742 containerd[1650]: time="2026-01-24T00:33:09.222145187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:09.222742 containerd[1650]: time="2026-01-24T00:33:09.222197437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:09.222742 containerd[1650]: time="2026-01-24T00:33:09.222207367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:09.222742 containerd[1650]: time="2026-01-24T00:33:09.222286057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:09.225224 containerd[1650]: time="2026-01-24T00:33:09.225198784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5n27,Uid:e954bcbc-6a7d-4fa9-9256-747a5b39530e,Namespace:calico-system,Attempt:1,} returns sandbox id \"b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928\"" Jan 24 00:33:09.227220 containerd[1650]: time="2026-01-24T00:33:09.226826853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:33:09.275301 containerd[1650]: time="2026-01-24T00:33:09.275261840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfcb7c46c-w555v,Uid:abb81e57-cdeb-458f-9a89-6ad70b4a9133,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8\"" Jan 24 00:33:09.523803 systemd-networkd[1262]: calie17e4dc2dea: Gained IPv6LL Jan 24 00:33:09.588806 systemd-networkd[1262]: cali460353233fb: Gained IPv6LL Jan 24 00:33:09.648667 containerd[1650]: time="2026-01-24T00:33:09.648311461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:09.650496 containerd[1650]: time="2026-01-24T00:33:09.650314119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:33:09.651480 containerd[1650]: time="2026-01-24T00:33:09.650422649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:09.651598 kubelet[2753]: E0124 00:33:09.651494 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:09.651598 kubelet[2753]: E0124 00:33:09.651554 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:09.653327 kubelet[2753]: E0124 00:33:09.652020 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5n27_calico-system(e954bcbc-6a7d-4fa9-9256-747a5b39530e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:09.653741 containerd[1650]: time="2026-01-24T00:33:09.652462878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:09.654596 kubelet[2753]: E0124 00:33:09.654543 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:33:09.715617 systemd-networkd[1262]: calica5eaf44f6a: Gained IPv6LL Jan 24 00:33:10.056334 kubelet[2753]: E0124 00:33:10.055499 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:33:10.056334 kubelet[2753]: E0124 00:33:10.055845 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:33:10.056334 kubelet[2753]: E0124 00:33:10.055920 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:33:10.108971 containerd[1650]: time="2026-01-24T00:33:10.108896954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:10.112359 containerd[1650]: time="2026-01-24T00:33:10.112047821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:10.112359 containerd[1650]: time="2026-01-24T00:33:10.112145492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:10.116143 kubelet[2753]: E0124 00:33:10.115144 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:10.116143 kubelet[2753]: E0124 00:33:10.115203 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:10.116445 kubelet[2753]: E0124 00:33:10.115362 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkljm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfcb7c46c-w555v_calico-apiserver(abb81e57-cdeb-458f-9a89-6ad70b4a9133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:10.118955 kubelet[2753]: E0124 00:33:10.117999 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:33:10.291559 systemd-networkd[1262]: cali6dca223a2a1: Gained IPv6LL Jan 24 00:33:10.931905 systemd-networkd[1262]: calif4420e3ad5d: Gained IPv6LL Jan 24 00:33:11.057577 kubelet[2753]: E0124 00:33:11.056918 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:33:11.057577 kubelet[2753]: E0124 00:33:11.057362 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:33:17.735559 containerd[1650]: time="2026-01-24T00:33:17.735476149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:33:18.177565 containerd[1650]: time="2026-01-24T00:33:18.177472079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:18.179431 containerd[1650]: time="2026-01-24T00:33:18.179325558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:33:18.179580 containerd[1650]: time="2026-01-24T00:33:18.179454308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:33:18.179838 kubelet[2753]: E0124 00:33:18.179751 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:18.179838 kubelet[2753]: E0124 00:33:18.179824 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:18.184608 kubelet[2753]: E0124 00:33:18.180732 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:212ab3e8d9b24b12bd1bf6f88681001f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:18.185028 containerd[1650]: time="2026-01-24T00:33:18.184727907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:33:18.647781 containerd[1650]: time="2026-01-24T00:33:18.647681432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:18.649612 containerd[1650]: time="2026-01-24T00:33:18.649282342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:33:18.649612 containerd[1650]: time="2026-01-24T00:33:18.649457612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:18.650205 kubelet[2753]: E0124 00:33:18.649905 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:18.650205 kubelet[2753]: E0124 00:33:18.649972 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:18.650205 kubelet[2753]: E0124 00:33:18.650115 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:18.652085 kubelet[2753]: E0124 00:33:18.651988 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:33:18.737937 containerd[1650]: time="2026-01-24T00:33:18.737861035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:19.175613 containerd[1650]: time="2026-01-24T00:33:19.175521877Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:19.177549 containerd[1650]: time="2026-01-24T00:33:19.177288817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:19.177549 containerd[1650]: time="2026-01-24T00:33:19.177341017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:19.178228 kubelet[2753]: E0124 00:33:19.177748 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:19.178228 kubelet[2753]: E0124 00:33:19.177811 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:19.178228 kubelet[2753]: E0124 00:33:19.177973 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8nk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-vcrjr_calico-apiserver(8571ab88-459c-48f7-a296-37c9ac9b6a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:19.179498 kubelet[2753]: E0124 00:33:19.179262 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:20.738052 containerd[1650]: time="2026-01-24T00:33:20.737657396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:21.182423 containerd[1650]: time="2026-01-24T00:33:21.182329994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:21.184351 containerd[1650]: time="2026-01-24T00:33:21.184177444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:21.184351 containerd[1650]: time="2026-01-24T00:33:21.184262264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:21.184551 kubelet[2753]: E0124 00:33:21.184484 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:21.185130 kubelet[2753]: E0124 00:33:21.184562 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:21.185130 kubelet[2753]: E0124 
00:33:21.184701 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xwzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-f9s4x_calico-apiserver(45f9a298-5fb0-472f-b747-58a979ff2009): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:21.186536 kubelet[2753]: E0124 00:33:21.186279 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:33:21.740492 containerd[1650]: time="2026-01-24T00:33:21.738318884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:33:22.180372 containerd[1650]: time="2026-01-24T00:33:22.180263030Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:22.181845 containerd[1650]: time="2026-01-24T00:33:22.181764600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:33:22.181919 containerd[1650]: time="2026-01-24T00:33:22.181873699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:22.182556 kubelet[2753]: E0124 00:33:22.182056 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:22.182556 kubelet[2753]: E0124 00:33:22.182539 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:22.183800 kubelet[2753]: E0124 00:33:22.182838 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5n27_calico-system(e954bcbc-6a7d-4fa9-9256-747a5b39530e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:22.184033 containerd[1650]: time="2026-01-24T00:33:22.183520409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:33:22.184312 kubelet[2753]: E0124 00:33:22.184108 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:33:22.621538 containerd[1650]: time="2026-01-24T00:33:22.621462216Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:22.623223 containerd[1650]: time="2026-01-24T00:33:22.623049875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:33:22.623223 containerd[1650]: time="2026-01-24T00:33:22.623075945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:22.623545 kubelet[2753]: E0124 00:33:22.623468 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:22.624955 kubelet[2753]: E0124 00:33:22.623551 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 
00:33:22.624955 kubelet[2753]: E0124 00:33:22.623779 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv79g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d6744f47-ksmv4_calico-system(60b48194-9cf1-4af7-bca5-7353b7dd4d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:22.625442 kubelet[2753]: E0124 00:33:22.625319 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" 
podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:33:22.735770 containerd[1650]: time="2026-01-24T00:33:22.734573580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:23.184703 containerd[1650]: time="2026-01-24T00:33:23.184629801Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:23.186442 containerd[1650]: time="2026-01-24T00:33:23.186256451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:23.186442 containerd[1650]: time="2026-01-24T00:33:23.186355401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:23.186608 kubelet[2753]: E0124 00:33:23.186532 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:23.186608 kubelet[2753]: E0124 00:33:23.186594 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:23.186868 kubelet[2753]: E0124 00:33:23.186726 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:23.189620 containerd[1650]: time="2026-01-24T00:33:23.189480380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:23.634530 containerd[1650]: time="2026-01-24T00:33:23.634425691Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:23.636153 containerd[1650]: time="2026-01-24T00:33:23.636021941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:23.636153 containerd[1650]: time="2026-01-24T00:33:23.636064071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:23.636368 kubelet[2753]: E0124 00:33:23.636316 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:23.637436 kubelet[2753]: E0124 00:33:23.636386 2753 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:23.637436 kubelet[2753]: E0124 00:33:23.636555 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:23.637852 kubelet[2753]: E0124 00:33:23.637778 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:33:24.730497 containerd[1650]: time="2026-01-24T00:33:24.729946965Z" level=info msg="StopPodSandbox for \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\"" Jan 24 00:33:24.742454 containerd[1650]: time="2026-01-24T00:33:24.740778264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.817 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e954bcbc-6a7d-4fa9-9256-747a5b39530e", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928", Pod:"goldmane-666569f655-r5n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6dca223a2a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.819 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.819 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" iface="eth0" netns="" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.819 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.819 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.868 [INFO][5258] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.868 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.868 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.875 [WARNING][5258] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.875 [INFO][5258] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.877 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:24.884259 containerd[1650]: 2026-01-24 00:33:24.880 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.885303 containerd[1650]: time="2026-01-24T00:33:24.884291343Z" level=info msg="TearDown network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" successfully" Jan 24 00:33:24.885303 containerd[1650]: time="2026-01-24T00:33:24.884321223Z" level=info msg="StopPodSandbox for \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" returns successfully" Jan 24 00:33:24.885303 containerd[1650]: time="2026-01-24T00:33:24.885085043Z" level=info msg="RemovePodSandbox for \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\"" Jan 24 00:33:24.885303 containerd[1650]: time="2026-01-24T00:33:24.885118943Z" level=info msg="Forcibly stopping sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\"" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.933 [WARNING][5272] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e954bcbc-6a7d-4fa9-9256-747a5b39530e", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"b46da7dd2766942cc861799fcc26acfabaad1dfd0bccbe842ebda2c8c5eb1928", Pod:"goldmane-666569f655-r5n27", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6dca223a2a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.934 [INFO][5272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.934 [INFO][5272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" iface="eth0" netns="" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.935 [INFO][5272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.935 [INFO][5272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.965 [INFO][5279] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.965 [INFO][5279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.965 [INFO][5279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.975 [WARNING][5279] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.975 [INFO][5279] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" HandleID="k8s-pod-network.01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-goldmane--666569f655--r5n27-eth0" Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.977 [INFO][5279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:24.983667 containerd[1650]: 2026-01-24 00:33:24.980 [INFO][5272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef" Jan 24 00:33:24.983667 containerd[1650]: time="2026-01-24T00:33:24.983626275Z" level=info msg="TearDown network for sandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" successfully" Jan 24 00:33:24.990613 containerd[1650]: time="2026-01-24T00:33:24.990557985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:24.990770 containerd[1650]: time="2026-01-24T00:33:24.990643715Z" level=info msg="RemovePodSandbox \"01980512e9705e6a944e98d9a1111365ff5a35472bc59dfdb58bb9981274cdef\" returns successfully" Jan 24 00:33:24.991425 containerd[1650]: time="2026-01-24T00:33:24.991356006Z" level=info msg="StopPodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\"" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.045 [WARNING][5293] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"45f9a298-5fb0-472f-b747-58a979ff2009", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5", Pod:"calico-apiserver-f4f66fd65-f9s4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica5eaf44f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.046 [INFO][5293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.046 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" iface="eth0" netns="" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.046 [INFO][5293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.046 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.078 [INFO][5300] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.078 [INFO][5300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.078 [INFO][5300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.089 [WARNING][5300] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.089 [INFO][5300] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.091 [INFO][5300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.100338 containerd[1650]: 2026-01-24 00:33:25.095 [INFO][5293] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.100338 containerd[1650]: time="2026-01-24T00:33:25.100194420Z" level=info msg="TearDown network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" successfully" Jan 24 00:33:25.100338 containerd[1650]: time="2026-01-24T00:33:25.100226060Z" level=info msg="StopPodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" returns successfully" Jan 24 00:33:25.101823 containerd[1650]: time="2026-01-24T00:33:25.101766790Z" level=info msg="RemovePodSandbox for \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\"" Jan 24 00:33:25.101918 containerd[1650]: time="2026-01-24T00:33:25.101847050Z" level=info msg="Forcibly stopping sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\"" Jan 24 00:33:25.181518 containerd[1650]: time="2026-01-24T00:33:25.181446457Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:25.184641 containerd[1650]: time="2026-01-24T00:33:25.184484676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:25.184854 containerd[1650]: time="2026-01-24T00:33:25.184707096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:25.185827 kubelet[2753]: E0124 00:33:25.185176 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:25.185827 kubelet[2753]: E0124 00:33:25.185246 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:25.185827 kubelet[2753]: E0124 00:33:25.185380 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkljm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfcb7c46c-w555v_calico-apiserver(abb81e57-cdeb-458f-9a89-6ad70b4a9133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:25.187358 kubelet[2753]: E0124 00:33:25.187190 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.157 [WARNING][5314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"45f9a298-5fb0-472f-b747-58a979ff2009", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"432740753d8c066924536c0e71bfe6ddaecb7058765937ea59f3802cafe3c3e5", Pod:"calico-apiserver-f4f66fd65-f9s4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica5eaf44f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.158 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.158 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" iface="eth0" netns="" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.158 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.158 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.211 [INFO][5322] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.211 [INFO][5322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.212 [INFO][5322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.219 [WARNING][5322] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.219 [INFO][5322] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" HandleID="k8s-pod-network.0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--f9s4x-eth0" Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.223 [INFO][5322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.231803 containerd[1650]: 2026-01-24 00:33:25.228 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c" Jan 24 00:33:25.232265 containerd[1650]: time="2026-01-24T00:33:25.231858814Z" level=info msg="TearDown network for sandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" successfully" Jan 24 00:33:25.237242 containerd[1650]: time="2026-01-24T00:33:25.237136874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:25.237242 containerd[1650]: time="2026-01-24T00:33:25.237198534Z" level=info msg="RemovePodSandbox \"0b8c502a59aef7be8a1d38df20e6870e36a886296160f154bc7d1cc308763e9c\" returns successfully" Jan 24 00:33:25.238934 containerd[1650]: time="2026-01-24T00:33:25.238903924Z" level=info msg="StopPodSandbox for \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\"" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.278 [WARNING][5336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"8571ab88-459c-48f7-a296-37c9ac9b6a8a", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921", Pod:"calico-apiserver-f4f66fd65-vcrjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57b1768dcce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.278 [INFO][5336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.278 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" iface="eth0" netns="" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.278 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.278 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.297 [INFO][5343] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.297 [INFO][5343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.297 [INFO][5343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.306 [WARNING][5343] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.306 [INFO][5343] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.308 [INFO][5343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.311946 containerd[1650]: 2026-01-24 00:33:25.310 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.311946 containerd[1650]: time="2026-01-24T00:33:25.311838370Z" level=info msg="TearDown network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" successfully" Jan 24 00:33:25.311946 containerd[1650]: time="2026-01-24T00:33:25.311860250Z" level=info msg="StopPodSandbox for \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" returns successfully" Jan 24 00:33:25.312966 containerd[1650]: time="2026-01-24T00:33:25.312586130Z" level=info msg="RemovePodSandbox for \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\"" Jan 24 00:33:25.312966 containerd[1650]: time="2026-01-24T00:33:25.312616470Z" level=info msg="Forcibly stopping sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\"" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.347 [WARNING][5357] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0", GenerateName:"calico-apiserver-f4f66fd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"8571ab88-459c-48f7-a296-37c9ac9b6a8a", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f66fd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"5296bd6875785b8e88829804e387070d88304634cc02ae8d5bb17d37edce9921", Pod:"calico-apiserver-f4f66fd65-vcrjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57b1768dcce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.347 [INFO][5357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.347 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" iface="eth0" netns="" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.347 [INFO][5357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.347 [INFO][5357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.373 [INFO][5364] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.374 [INFO][5364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.374 [INFO][5364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.380 [WARNING][5364] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.380 [INFO][5364] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" HandleID="k8s-pod-network.9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--f4f66fd65--vcrjr-eth0" Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.383 [INFO][5364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.389981 containerd[1650]: 2026-01-24 00:33:25.386 [INFO][5357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b" Jan 24 00:33:25.390615 containerd[1650]: time="2026-01-24T00:33:25.390037297Z" level=info msg="TearDown network for sandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" successfully" Jan 24 00:33:25.399625 containerd[1650]: time="2026-01-24T00:33:25.399466047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:25.399625 containerd[1650]: time="2026-01-24T00:33:25.399524327Z" level=info msg="RemovePodSandbox \"9962b513d74e499c74e3cb7c19d37cc9c00c9c1d1128c2c9c237f85880db8e8b\" returns successfully" Jan 24 00:33:25.400206 containerd[1650]: time="2026-01-24T00:33:25.400167337Z" level=info msg="StopPodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\"" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.458 [WARNING][5378] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08d51dd3-a54b-4b8c-9510-41c1d4106f97", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96", Pod:"csi-node-driver-jv7gx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41069dfb3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.458 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.458 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" iface="eth0" netns="" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.458 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.458 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.489 [INFO][5385] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.490 [INFO][5385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.490 [INFO][5385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.497 [WARNING][5385] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.498 [INFO][5385] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.500 [INFO][5385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.506658 containerd[1650]: 2026-01-24 00:33:25.503 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.506658 containerd[1650]: time="2026-01-24T00:33:25.506602741Z" level=info msg="TearDown network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" successfully" Jan 24 00:33:25.506658 containerd[1650]: time="2026-01-24T00:33:25.506633311Z" level=info msg="StopPodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" returns successfully" Jan 24 00:33:25.507668 containerd[1650]: time="2026-01-24T00:33:25.507624982Z" level=info msg="RemovePodSandbox for \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\"" Jan 24 00:33:25.507668 containerd[1650]: time="2026-01-24T00:33:25.507666252Z" level=info msg="Forcibly stopping sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\"" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.556 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08d51dd3-a54b-4b8c-9510-41c1d4106f97", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"350cbd8ab696e753bb377ec100ed03139c4f9c090e390ab01fe5b3b4f5072e96", Pod:"csi-node-driver-jv7gx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41069dfb3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.556 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.556 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" iface="eth0" netns="" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.556 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.556 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.591 [INFO][5406] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.591 [INFO][5406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.591 [INFO][5406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.599 [WARNING][5406] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.599 [INFO][5406] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" HandleID="k8s-pod-network.5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-csi--node--driver--jv7gx-eth0" Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.601 [INFO][5406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.607303 containerd[1650]: 2026-01-24 00:33:25.604 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3" Jan 24 00:33:25.608322 containerd[1650]: time="2026-01-24T00:33:25.607330677Z" level=info msg="TearDown network for sandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" successfully" Jan 24 00:33:25.612861 containerd[1650]: time="2026-01-24T00:33:25.612701296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:25.612861 containerd[1650]: time="2026-01-24T00:33:25.612763106Z" level=info msg="RemovePodSandbox \"5e6fc284e88673e193c9910bdf52dc2df04392d112b0f429bf831dc0545477c3\" returns successfully" Jan 24 00:33:25.613916 containerd[1650]: time="2026-01-24T00:33:25.613486877Z" level=info msg="StopPodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\"" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.659 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b6ade8-6f1f-4156-806d-99bf4d2944e2", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967", Pod:"coredns-668d6bf9bc-9hl7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f5ba50de34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.659 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.659 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" iface="eth0" netns="" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.659 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.659 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.687 [INFO][5428] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.687 [INFO][5428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.687 [INFO][5428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.694 [WARNING][5428] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.694 [INFO][5428] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.696 [INFO][5428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.703027 containerd[1650]: 2026-01-24 00:33:25.699 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.703654 containerd[1650]: time="2026-01-24T00:33:25.703128293Z" level=info msg="TearDown network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" successfully" Jan 24 00:33:25.703654 containerd[1650]: time="2026-01-24T00:33:25.703157203Z" level=info msg="StopPodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" returns successfully" Jan 24 00:33:25.704806 containerd[1650]: time="2026-01-24T00:33:25.704354472Z" level=info msg="RemovePodSandbox for \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\"" Jan 24 00:33:25.704806 containerd[1650]: time="2026-01-24T00:33:25.704434013Z" level=info msg="Forcibly stopping sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\"" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.753 [WARNING][5443] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b6ade8-6f1f-4156-806d-99bf4d2944e2", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"6344b074c04bbb494edb54ddbd7318166c6311068424985073107b6d58e07967", Pod:"coredns-668d6bf9bc-9hl7k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f5ba50de34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.753 [INFO][5443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.753 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" iface="eth0" netns="" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.753 [INFO][5443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.753 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.780 [INFO][5450] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.782 [INFO][5450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.782 [INFO][5450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.789 [WARNING][5450] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.789 [INFO][5450] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" HandleID="k8s-pod-network.f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--9hl7k-eth0" Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.791 [INFO][5450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.797355 containerd[1650]: 2026-01-24 00:33:25.794 [INFO][5443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac" Jan 24 00:33:25.800444 containerd[1650]: time="2026-01-24T00:33:25.799926948Z" level=info msg="TearDown network for sandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" successfully" Jan 24 00:33:25.805056 containerd[1650]: time="2026-01-24T00:33:25.805007098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:25.805172 containerd[1650]: time="2026-01-24T00:33:25.805092358Z" level=info msg="RemovePodSandbox \"f0328ead9a379c5c934a4c1395f5cddeec596243749c827fa826d926712e3fac\" returns successfully" Jan 24 00:33:25.805715 containerd[1650]: time="2026-01-24T00:33:25.805676478Z" level=info msg="StopPodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\"" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.852 [WARNING][5465] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0", GenerateName:"calico-kube-controllers-65d6744f47-", Namespace:"calico-system", SelfLink:"", UID:"60b48194-9cf1-4af7-bca5-7353b7dd4d41", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d6744f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c", Pod:"calico-kube-controllers-65d6744f47-ksmv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie17e4dc2dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.852 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.852 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" iface="eth0" netns="" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.852 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.853 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.881 [INFO][5472] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.882 [INFO][5472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.882 [INFO][5472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.889 [WARNING][5472] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.889 [INFO][5472] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.890 [INFO][5472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:25.899110 containerd[1650]: 2026-01-24 00:33:25.895 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:25.899110 containerd[1650]: time="2026-01-24T00:33:25.898934794Z" level=info msg="TearDown network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" successfully" Jan 24 00:33:25.899110 containerd[1650]: time="2026-01-24T00:33:25.898968484Z" level=info msg="StopPodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" returns successfully" Jan 24 00:33:25.901061 containerd[1650]: time="2026-01-24T00:33:25.900336574Z" level=info msg="RemovePodSandbox for \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\"" Jan 24 00:33:25.901061 containerd[1650]: time="2026-01-24T00:33:25.900373954Z" level=info msg="Forcibly stopping sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\"" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.956 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0", GenerateName:"calico-kube-controllers-65d6744f47-", Namespace:"calico-system", SelfLink:"", UID:"60b48194-9cf1-4af7-bca5-7353b7dd4d41", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d6744f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"9b16d4ab10e954ab7e5502f0fc3f2e8c0dca3a9e5d11a8eab54187283e6f6d6c", Pod:"calico-kube-controllers-65d6744f47-ksmv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie17e4dc2dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.956 [INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.956 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" iface="eth0" netns="" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.956 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.956 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.988 [INFO][5494] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.989 [INFO][5494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:25.989 [INFO][5494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:26.000 [WARNING][5494] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:26.001 [INFO][5494] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" HandleID="k8s-pod-network.7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--kube--controllers--65d6744f47--ksmv4-eth0" Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:26.002 [INFO][5494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.009903 containerd[1650]: 2026-01-24 00:33:26.005 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d" Jan 24 00:33:26.010528 containerd[1650]: time="2026-01-24T00:33:26.009929759Z" level=info msg="TearDown network for sandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" successfully" Jan 24 00:33:26.015379 containerd[1650]: time="2026-01-24T00:33:26.015315709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:26.015379 containerd[1650]: time="2026-01-24T00:33:26.015374969Z" level=info msg="RemovePodSandbox \"7da988224e02efacacbf41da865446bcc0b13db71dd771b2ae7221d284a8372d\" returns successfully" Jan 24 00:33:26.016007 containerd[1650]: time="2026-01-24T00:33:26.015942989Z" level=info msg="StopPodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\"" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.068 [WARNING][5508] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c67637af-e8b0-4286-97b2-b018e1728d18", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258", Pod:"coredns-668d6bf9bc-xrqs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali460353233fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.068 [INFO][5508] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.068 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" iface="eth0" netns="" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.068 [INFO][5508] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.068 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.108 [INFO][5516] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.108 [INFO][5516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.108 [INFO][5516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.117 [WARNING][5516] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.117 [INFO][5516] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.119 [INFO][5516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.126707 containerd[1650]: 2026-01-24 00:33:26.122 [INFO][5508] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.127586 containerd[1650]: time="2026-01-24T00:33:26.126716337Z" level=info msg="TearDown network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" successfully" Jan 24 00:33:26.127586 containerd[1650]: time="2026-01-24T00:33:26.126752867Z" level=info msg="StopPodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" returns successfully" Jan 24 00:33:26.127586 containerd[1650]: time="2026-01-24T00:33:26.127157727Z" level=info msg="RemovePodSandbox for \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\"" Jan 24 00:33:26.127586 containerd[1650]: time="2026-01-24T00:33:26.127193967Z" level=info msg="Forcibly stopping sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\"" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.188 [WARNING][5530] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c67637af-e8b0-4286-97b2-b018e1728d18", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"7a2d73ae482d9ff189d0dde4488635d28cb7d596ecd30233797626f7d9252258", Pod:"coredns-668d6bf9bc-xrqs4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali460353233fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.188 [INFO][5530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.188 [INFO][5530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" iface="eth0" netns="" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.188 [INFO][5530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.189 [INFO][5530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.229 [INFO][5537] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.229 [INFO][5537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.229 [INFO][5537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.239 [WARNING][5537] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.239 [INFO][5537] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" HandleID="k8s-pod-network.b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-coredns--668d6bf9bc--xrqs4-eth0" Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.241 [INFO][5537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.248450 containerd[1650]: 2026-01-24 00:33:26.245 [INFO][5530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0" Jan 24 00:33:26.249095 containerd[1650]: time="2026-01-24T00:33:26.248497525Z" level=info msg="TearDown network for sandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" successfully" Jan 24 00:33:26.253561 containerd[1650]: time="2026-01-24T00:33:26.253520045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:26.253685 containerd[1650]: time="2026-01-24T00:33:26.253574655Z" level=info msg="RemovePodSandbox \"b87b98a3af7ee3503771164041e0de4cad3a5c161331c13536afbe36658eecd0\" returns successfully" Jan 24 00:33:26.254060 containerd[1650]: time="2026-01-24T00:33:26.254033444Z" level=info msg="StopPodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\"" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.306 [WARNING][5551] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0", GenerateName:"calico-apiserver-6bfcb7c46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb81e57-cdeb-458f-9a89-6ad70b4a9133", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bfcb7c46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8", Pod:"calico-apiserver-6bfcb7c46c-w555v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4420e3ad5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.306 [INFO][5551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.306 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" iface="eth0" netns="" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.306 [INFO][5551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.306 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.344 [INFO][5558] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.345 [INFO][5558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.345 [INFO][5558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.355 [WARNING][5558] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.355 [INFO][5558] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.357 [INFO][5558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.368154 containerd[1650]: 2026-01-24 00:33:26.362 [INFO][5551] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.370947 containerd[1650]: time="2026-01-24T00:33:26.369271283Z" level=info msg="TearDown network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" successfully" Jan 24 00:33:26.370947 containerd[1650]: time="2026-01-24T00:33:26.369350452Z" level=info msg="StopPodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" returns successfully" Jan 24 00:33:26.370947 containerd[1650]: time="2026-01-24T00:33:26.370310673Z" level=info msg="RemovePodSandbox for \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\"" Jan 24 00:33:26.370947 containerd[1650]: time="2026-01-24T00:33:26.370355223Z" level=info msg="Forcibly stopping sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\"" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.442 [WARNING][5574] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0", GenerateName:"calico-apiserver-6bfcb7c46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb81e57-cdeb-458f-9a89-6ad70b4a9133", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bfcb7c46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a9e48d2ea0", ContainerID:"80394510b046e7075781c3fb36d8e1f812f8c205024a5a05da5a10108c12b0b8", Pod:"calico-apiserver-6bfcb7c46c-w555v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4420e3ad5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.444 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.444 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" iface="eth0" netns="" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.444 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.445 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.478 [INFO][5581] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.479 [INFO][5581] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.479 [INFO][5581] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.482 [WARNING][5581] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.482 [INFO][5581] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" HandleID="k8s-pod-network.a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-calico--apiserver--6bfcb7c46c--w555v-eth0" Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.484 [INFO][5581] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.489338 containerd[1650]: 2026-01-24 00:33:26.486 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1" Jan 24 00:33:26.489338 containerd[1650]: time="2026-01-24T00:33:26.489148281Z" level=info msg="TearDown network for sandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" successfully" Jan 24 00:33:26.494426 containerd[1650]: time="2026-01-24T00:33:26.494303310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:26.494426 containerd[1650]: time="2026-01-24T00:33:26.494351641Z" level=info msg="RemovePodSandbox \"a217d8c34dbd51e19632ff3d2dc59fbaf1b83772998cdb977c005009e5f829c1\" returns successfully" Jan 24 00:33:26.495376 containerd[1650]: time="2026-01-24T00:33:26.495360961Z" level=info msg="StopPodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\"" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.525 [WARNING][5595] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.525 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.525 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" iface="eth0" netns="" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.525 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.525 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.548 [INFO][5602] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.549 [INFO][5602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.549 [INFO][5602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.553 [WARNING][5602] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.553 [INFO][5602] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.554 [INFO][5602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.558339 containerd[1650]: 2026-01-24 00:33:26.556 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.559077 containerd[1650]: time="2026-01-24T00:33:26.558374230Z" level=info msg="TearDown network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" successfully" Jan 24 00:33:26.559077 containerd[1650]: time="2026-01-24T00:33:26.558444770Z" level=info msg="StopPodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" returns successfully" Jan 24 00:33:26.559271 containerd[1650]: time="2026-01-24T00:33:26.559107699Z" level=info msg="RemovePodSandbox for \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\"" Jan 24 00:33:26.559271 containerd[1650]: time="2026-01-24T00:33:26.559146400Z" level=info msg="Forcibly stopping sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\"" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.593 [WARNING][5617] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" WorkloadEndpoint="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.593 [INFO][5617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.593 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" iface="eth0" netns="" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.593 [INFO][5617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.593 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.609 [INFO][5625] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.609 [INFO][5625] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.609 [INFO][5625] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.613 [WARNING][5625] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.613 [INFO][5625] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" HandleID="k8s-pod-network.d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Workload="ci--4081--3--6--n--a9e48d2ea0-k8s-whisker--6974f9cb6--rk89t-eth0" Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.615 [INFO][5625] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:26.620115 containerd[1650]: 2026-01-24 00:33:26.617 [INFO][5617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106" Jan 24 00:33:26.620851 containerd[1650]: time="2026-01-24T00:33:26.620193499Z" level=info msg="TearDown network for sandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" successfully" Jan 24 00:33:26.624293 containerd[1650]: time="2026-01-24T00:33:26.624264479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:33:26.624398 containerd[1650]: time="2026-01-24T00:33:26.624303119Z" level=info msg="RemovePodSandbox \"d73c6a86204171f06b0afb71fa0d60e7b7d852a50e14c0ac2369222f7b57d106\" returns successfully" Jan 24 00:33:31.734935 kubelet[2753]: E0124 00:33:31.734823 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:31.737137 kubelet[2753]: E0124 00:33:31.735749 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:33:32.734103 kubelet[2753]: E0124 00:33:32.733926 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:33:33.733432 kubelet[2753]: E0124 00:33:33.733335 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:33:34.738580 kubelet[2753]: E0124 00:33:34.737132 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:33:34.738580 kubelet[2753]: E0124 00:33:34.737600 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:33:36.733322 kubelet[2753]: E0124 00:33:36.733243 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:33:43.735564 containerd[1650]: 
time="2026-01-24T00:33:43.734475640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:44.190528 containerd[1650]: time="2026-01-24T00:33:44.190449890Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:44.199311 containerd[1650]: time="2026-01-24T00:33:44.199232405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:44.201378 containerd[1650]: time="2026-01-24T00:33:44.200428886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:44.201942 kubelet[2753]: E0124 00:33:44.201886 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:44.206482 kubelet[2753]: E0124 00:33:44.201957 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:44.206482 kubelet[2753]: E0124 00:33:44.202229 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8nk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-vcrjr_calico-apiserver(8571ab88-459c-48f7-a296-37c9ac9b6a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:44.206482 kubelet[2753]: E0124 00:33:44.203502 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:44.206814 containerd[1650]: time="2026-01-24T00:33:44.203792277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:44.631511 containerd[1650]: time="2026-01-24T00:33:44.631451127Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:44.632808 containerd[1650]: time="2026-01-24T00:33:44.632753747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:44.632911 containerd[1650]: time="2026-01-24T00:33:44.632870266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:44.633254 kubelet[2753]: E0124 00:33:44.633140 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:44.633312 kubelet[2753]: E0124 00:33:44.633275 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:44.633572 kubelet[2753]: E0124 
00:33:44.633459 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xwzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-f9s4x_calico-apiserver(45f9a298-5fb0-472f-b747-58a979ff2009): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:44.635539 kubelet[2753]: E0124 00:33:44.635494 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:33:44.737861 containerd[1650]: time="2026-01-24T00:33:44.737823873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:33:45.181967 containerd[1650]: time="2026-01-24T00:33:45.181909926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:45.183512 containerd[1650]: time="2026-01-24T00:33:45.183451153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:33:45.183651 containerd[1650]: time="2026-01-24T00:33:45.183538752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:33:45.183733 kubelet[2753]: E0124 00:33:45.183686 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:45.183796 kubelet[2753]: E0124 00:33:45.183740 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:45.183958 kubelet[2753]: E0124 00:33:45.183892 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:212ab3e8d9b24b12bd1bf6f88681001f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:45.188001 containerd[1650]: time="2026-01-24T00:33:45.187937587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:33:45.625839 containerd[1650]: time="2026-01-24T00:33:45.625657924Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:45.627582 containerd[1650]: time="2026-01-24T00:33:45.627242980Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:33:45.627582 containerd[1650]: time="2026-01-24T00:33:45.627317599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:45.627753 kubelet[2753]: E0124 00:33:45.627569 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:45.627753 kubelet[2753]: E0124 00:33:45.627634 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:45.629068 kubelet[2753]: E0124 00:33:45.627770 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:45.629506 kubelet[2753]: E0124 00:33:45.629438 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:33:46.739670 containerd[1650]: time="2026-01-24T00:33:46.738883653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:47.187466 containerd[1650]: time="2026-01-24T00:33:47.187377292Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:47.189149 containerd[1650]: time="2026-01-24T00:33:47.189061789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:47.189262 containerd[1650]: time="2026-01-24T00:33:47.189155038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:47.189445 kubelet[2753]: E0124 00:33:47.189313 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:47.189445 kubelet[2753]: E0124 00:33:47.189435 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:47.190142 kubelet[2753]: E0124 00:33:47.189692 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:47.190439 containerd[1650]: time="2026-01-24T00:33:47.190281869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:33:47.821633 containerd[1650]: time="2026-01-24T00:33:47.821377869Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:47.825442 containerd[1650]: time="2026-01-24T00:33:47.824219617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:33:47.825442 containerd[1650]: time="2026-01-24T00:33:47.824317306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:47.827418 kubelet[2753]: E0124 00:33:47.824529 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:47.827418 kubelet[2753]: E0124 00:33:47.825686 2753 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:47.827418 kubelet[2753]: E0124 00:33:47.825968 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv79g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d6744f47-ksmv4_calico-system(60b48194-9cf1-4af7-bca5-7353b7dd4d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:47.828220 containerd[1650]: time="2026-01-24T00:33:47.827975138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:47.828828 kubelet[2753]: E0124 00:33:47.828735 2753 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:33:48.260134 containerd[1650]: time="2026-01-24T00:33:48.259477368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:48.261621 containerd[1650]: time="2026-01-24T00:33:48.261476053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:48.261621 containerd[1650]: time="2026-01-24T00:33:48.261567642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:48.261840 kubelet[2753]: E0124 00:33:48.261789 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:48.262370 kubelet[2753]: E0124 00:33:48.261858 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:48.262370 kubelet[2753]: E0124 00:33:48.262130 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:48.263418 kubelet[2753]: E0124 00:33:48.263320 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:33:48.263687 containerd[1650]: time="2026-01-24T00:33:48.263496058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:48.705633 containerd[1650]: time="2026-01-24T00:33:48.705480855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:48.707918 containerd[1650]: time="2026-01-24T00:33:48.707745749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:48.709102 containerd[1650]: time="2026-01-24T00:33:48.707882327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:48.710489 kubelet[2753]: E0124 00:33:48.708152 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:48.710489 kubelet[2753]: E0124 00:33:48.708519 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:48.710678 containerd[1650]: time="2026-01-24T00:33:48.709239068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:33:48.710730 kubelet[2753]: E0124 00:33:48.709626 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkljm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6bfcb7c46c-w555v_calico-apiserver(abb81e57-cdeb-458f-9a89-6ad70b4a9133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:48.713128 kubelet[2753]: E0124 00:33:48.713019 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:33:49.152455 containerd[1650]: time="2026-01-24T00:33:49.152353397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:49.154403 containerd[1650]: time="2026-01-24T00:33:49.153833896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:33:49.154403 containerd[1650]: time="2026-01-24T00:33:49.153956245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:49.154501 kubelet[2753]: E0124 00:33:49.154266 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:49.154501 kubelet[2753]: E0124 00:33:49.154328 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:49.154966 kubelet[2753]: E0124 00:33:49.154577 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5n27_calico-system(e954bcbc-6a7d-4fa9-9256-747a5b39530e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:49.156330 kubelet[2753]: E0124 00:33:49.156292 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:33:55.733925 kubelet[2753]: E0124 
00:33:55.733245 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:33:56.734696 kubelet[2753]: E0124 00:33:56.734576 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:33:57.733563 kubelet[2753]: E0124 00:33:57.731783 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:33:58.733331 kubelet[2753]: E0124 00:33:58.733123 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:34:00.739899 kubelet[2753]: E0124 00:34:00.739836 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:34:01.733484 kubelet[2753]: E0124 00:34:01.733170 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:34:02.735670 kubelet[2753]: E0124 00:34:02.734802 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:34:06.559300 systemd[1]: Started sshd@8-65.109.167.77:22-20.161.92.111:54890.service - OpenSSH per-connection server daemon (20.161.92.111:54890). Jan 24 00:34:07.333560 sshd[5693]: Accepted publickey for core from 20.161.92.111 port 54890 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:07.336796 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:07.342798 systemd-logind[1620]: New session 8 of user core. Jan 24 00:34:07.349156 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:34:07.733957 kubelet[2753]: E0124 00:34:07.733605 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:34:08.006810 sshd[5693]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:08.014032 systemd[1]: sshd@8-65.109.167.77:22-20.161.92.111:54890.service: Deactivated successfully. Jan 24 00:34:08.020463 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:34:08.021920 systemd-logind[1620]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:34:08.023064 systemd-logind[1620]: Removed session 8. 
Jan 24 00:34:09.734585 kubelet[2753]: E0124 00:34:09.734293 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:34:11.739087 kubelet[2753]: E0124 00:34:11.737709 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:34:13.136220 systemd[1]: Started sshd@9-65.109.167.77:22-20.161.92.111:33544.service - OpenSSH per-connection server daemon (20.161.92.111:33544). Jan 24 00:34:13.733014 kubelet[2753]: E0124 00:34:13.732857 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:34:13.907451 sshd[5708]: Accepted publickey for core from 20.161.92.111 port 33544 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:13.909671 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:13.919086 systemd-logind[1620]: New session 9 of user core. Jan 24 00:34:13.925249 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:34:14.507971 sshd[5708]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:14.513861 systemd[1]: sshd@9-65.109.167.77:22-20.161.92.111:33544.service: Deactivated successfully. Jan 24 00:34:14.522867 systemd-logind[1620]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:34:14.524683 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:34:14.526785 systemd-logind[1620]: Removed session 9. 
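[Note on the cadence of the entries above: after the initial ErrImagePull failures, the kubelet parks each container in ImagePullBackOff and only re-attempts the pull on an exponential schedule. A minimal sketch of that schedule, assuming kubelet's commonly cited defaults of a 10 s initial delay doubling up to a 300 s cap (the exact constants live in kubelet's source, not in this log):

    # Sketch of kubelet-style image-pull backoff (assumed defaults:
    # 10 s initial delay, doubling per consecutive failure, 300 s cap).
    def backoff_schedule(failures: int, base: float = 10.0, cap: float = 300.0):
        delay = base
        for _ in range(failures):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_schedule(6)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

This is why the same "Back-off pulling image" entry recurs at widening intervals rather than on every pod sync.]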
Jan 24 00:34:14.754054 kubelet[2753]: E0124 00:34:14.753948 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:34:15.736284 kubelet[2753]: E0124 00:34:15.735705 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:34:16.735707 kubelet[2753]: E0124 00:34:16.735663 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:34:19.639035 systemd[1]: Started sshd@10-65.109.167.77:22-20.161.92.111:33552.service - OpenSSH per-connection server daemon (20.161.92.111:33552). Jan 24 00:34:19.734113 kubelet[2753]: E0124 00:34:19.734073 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:34:20.401518 sshd[5727]: Accepted publickey for core from 20.161.92.111 port 33552 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:20.402592 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:20.417171 systemd-logind[1620]: New session 10 of user core. Jan 24 00:34:20.426692 systemd[1]: Started session-10.scope - Session 10 of User core. 
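[The root cause is visible in the containerd lines: ghcr.io answers every manifest request with HTTP 404 ("trying next host - response was http.StatusNotFound"), i.e. the v3.30.4 tags simply do not resolve under ghcr.io/flatcar/calico. The same check can be reproduced off-cluster against the OCI distribution API; a sketch using only the Python standard library, assuming ghcr.io's standard anonymous token flow for public repositories:

    # Sketch: ask ghcr.io whether a tag resolves -- the same manifest request
    # containerd makes. Assumes ghcr.io's anonymous token endpoint for public
    # repositories; a 404 here corresponds to the NotFound in the log.
    import json
    import urllib.error
    import urllib.request

    ACCEPT = ", ".join([
        "application/vnd.oci.image.index.v1+json",
        "application/vnd.oci.image.manifest.v1+json",
        "application/vnd.docker.distribution.manifest.list.v2+json",
        "application/vnd.docker.distribution.manifest.v2+json",
    ])

    def manifest_exists(repo: str, tag: str) -> bool:
        token = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={"Authorization": f"Bearer {token}", "Accept": ACCEPT})
        try:
            urllib.request.urlopen(req).close()
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(manifest_exists("flatcar/calico/apiserver", "v3.30.4"))

If this prints False while the tag exists in the usual upstream Calico registries, the problem is the image reference or mirroring setup rather than anything on the node.]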
Jan 24 00:34:20.735675 kubelet[2753]: E0124 00:34:20.735076 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:34:21.032852 sshd[5727]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:21.037773 systemd-logind[1620]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:34:21.039863 systemd[1]: sshd@10-65.109.167.77:22-20.161.92.111:33552.service: Deactivated successfully. Jan 24 00:34:21.051335 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:34:21.060039 systemd-logind[1620]: Removed session 10. Jan 24 00:34:21.166595 systemd[1]: Started sshd@11-65.109.167.77:22-20.161.92.111:33568.service - OpenSSH per-connection server daemon (20.161.92.111:33568). Jan 24 00:34:21.910791 sshd[5742]: Accepted publickey for core from 20.161.92.111 port 33568 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:21.913222 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:21.920946 systemd-logind[1620]: New session 11 of user core. Jan 24 00:34:21.928791 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:34:22.606193 sshd[5742]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:22.609658 systemd[1]: sshd@11-65.109.167.77:22-20.161.92.111:33568.service: Deactivated successfully. Jan 24 00:34:22.613151 systemd-logind[1620]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:34:22.614688 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:34:22.616047 systemd-logind[1620]: Removed session 11. Jan 24 00:34:22.735216 systemd[1]: Started sshd@12-65.109.167.77:22-20.161.92.111:53462.service - OpenSSH per-connection server daemon (20.161.92.111:53462). Jan 24 00:34:23.488265 sshd[5754]: Accepted publickey for core from 20.161.92.111 port 53462 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:23.488807 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:23.494113 systemd-logind[1620]: New session 12 of user core. Jan 24 00:34:23.496668 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:34:24.136845 sshd[5754]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:24.146348 systemd[1]: sshd@12-65.109.167.77:22-20.161.92.111:53462.service: Deactivated successfully. Jan 24 00:34:24.162636 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:34:24.168355 systemd-logind[1620]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:34:24.170700 systemd-logind[1620]: Removed session 12. 
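[The widening retry gaps can be read straight off the timestamps: the whisker pull that failed at 00:33:45 above is only re-attempted at 00:34:26 just below, about 41 s later. A hypothetical helper that extracts the attempt times per image from journal text on stdin, matching the level=info PullImage lines containerd emits:

    # Hypothetical helper: list PullImage attempt gaps per image.
    # Reads journal text on stdin; matches containerd lines of the form
    #   time="...Z" level=info msg="PullImage \"<image>\""
    import re
    import sys
    from collections import defaultdict
    from datetime import datetime

    ATTEMPT = re.compile(
        r'time="([0-9T:.\-]+)Z" level=info msg="PullImage \\+"([^"\\]+)')

    attempts = defaultdict(list)
    for stamp, image in ATTEMPT.findall(sys.stdin.read()):
        # fromisoformat() takes at most microsecond precision; trim the
        # 9-digit fractional seconds down to 6 digits.
        attempts[image].append(datetime.fromisoformat(stamp[:26]))

    for image, times in sorted(attempts.items()):
        gaps = [round((b - a).total_seconds())
                for a, b in zip(times, times[1:])]
        print(image, "gaps(s):", gaps)

Run over this journal, it shows each image's pull attempts spacing out in line with the backoff sketch earlier.]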
Jan 24 00:34:26.734769 containerd[1650]: time="2026-01-24T00:34:26.734373354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:34:27.203588 containerd[1650]: time="2026-01-24T00:34:27.203518658Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:27.208160 containerd[1650]: time="2026-01-24T00:34:27.205515152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:34:27.208160 containerd[1650]: time="2026-01-24T00:34:27.205574752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:34:27.208342 kubelet[2753]: E0124 00:34:27.206545 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:34:27.208342 kubelet[2753]: E0124 00:34:27.206602 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:34:27.208342 kubelet[2753]: E0124 00:34:27.206734 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:212ab3e8d9b24b12bd1bf6f88681001f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:27.212888 containerd[1650]: time="2026-01-24T00:34:27.212848601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:34:27.657634 containerd[1650]: time="2026-01-24T00:34:27.656936305Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:27.659473 containerd[1650]: time="2026-01-24T00:34:27.658842129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:34:27.659473 containerd[1650]: time="2026-01-24T00:34:27.658924079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:34:27.662012 kubelet[2753]: E0124 00:34:27.661492 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:34:27.662012 kubelet[2753]: E0124 00:34:27.661549 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:34:27.662012 kubelet[2753]: E0124 00:34:27.661665 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjdj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58745d67dd-ct89f_calico-system(de21b4a8-c633-4342-967f-dd18be2c5322): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:27.663144 kubelet[2753]: E0124 00:34:27.663112 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:34:27.732125 kubelet[2753]: E0124 00:34:27.732073 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:34:28.735036 containerd[1650]: time="2026-01-24T00:34:28.734907898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:34:29.168637 containerd[1650]: time="2026-01-24T00:34:29.168289438Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:29.170461 containerd[1650]: time="2026-01-24T00:34:29.170104942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:34:29.170461 containerd[1650]: time="2026-01-24T00:34:29.170229422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:34:29.171610 kubelet[2753]: E0124 00:34:29.170970 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:34:29.171610 kubelet[2753]: E0124 00:34:29.171053 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:34:29.171610 kubelet[2753]: E0124 00:34:29.171211 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:29.176051 containerd[1650]: time="2026-01-24T00:34:29.174601580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:34:29.274180 systemd[1]: Started sshd@13-65.109.167.77:22-20.161.92.111:53468.service - OpenSSH per-connection server daemon (20.161.92.111:53468). 
Jan 24 00:34:29.613480 containerd[1650]: time="2026-01-24T00:34:29.613292781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:29.615207 containerd[1650]: time="2026-01-24T00:34:29.615125026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:34:29.615493 containerd[1650]: time="2026-01-24T00:34:29.615232035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:34:29.615783 kubelet[2753]: E0124 00:34:29.615382 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:34:29.615783 kubelet[2753]: E0124 00:34:29.615438 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:34:29.615783 kubelet[2753]: E0124 00:34:29.615544 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv7gx_calico-system(08d51dd3-a54b-4b8c-9510-41c1d4106f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:29.616876 kubelet[2753]: E0124 00:34:29.616836 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:34:29.732330 containerd[1650]: time="2026-01-24T00:34:29.732280344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:34:30.050610 sshd[5778]: Accepted publickey for core from 20.161.92.111 port 53468 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:30.053949 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:30.074479 systemd-logind[1620]: New session 13 of user core. 
Jan 24 00:34:30.082669 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:34:30.164010 containerd[1650]: time="2026-01-24T00:34:30.163967703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:30.165884 containerd[1650]: time="2026-01-24T00:34:30.165811828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:34:30.165884 containerd[1650]: time="2026-01-24T00:34:30.165842588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:30.166271 kubelet[2753]: E0124 00:34:30.166112 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:30.166271 kubelet[2753]: E0124 00:34:30.166154 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:30.169090 kubelet[2753]: E0124 00:34:30.168964 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkljm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfcb7c46c-w555v_calico-apiserver(abb81e57-cdeb-458f-9a89-6ad70b4a9133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:30.170581 kubelet[2753]: E0124 00:34:30.170258 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:34:30.694500 sshd[5778]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:30.701107 systemd-logind[1620]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:34:30.703082 systemd[1]: sshd@13-65.109.167.77:22-20.161.92.111:53468.service: Deactivated successfully. Jan 24 00:34:30.715888 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:34:30.719287 systemd-logind[1620]: Removed session 13. Jan 24 00:34:30.735757 containerd[1650]: time="2026-01-24T00:34:30.735540000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:34:30.825855 systemd[1]: Started sshd@14-65.109.167.77:22-20.161.92.111:53476.service - OpenSSH per-connection server daemon (20.161.92.111:53476). 
Jan 24 00:34:31.171200 containerd[1650]: time="2026-01-24T00:34:31.171149733Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:31.174415 containerd[1650]: time="2026-01-24T00:34:31.172486918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:34:31.174415 containerd[1650]: time="2026-01-24T00:34:31.172553228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:31.174581 kubelet[2753]: E0124 00:34:31.172716 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:34:31.174581 kubelet[2753]: E0124 00:34:31.172767 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:34:31.174581 kubelet[2753]: E0124 00:34:31.172863 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5n27_calico-system(e954bcbc-6a7d-4fa9-9256-747a5b39530e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:31.174581 kubelet[2753]: E0124 00:34:31.174218 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:34:31.607240 sshd[5794]: Accepted publickey for core from 20.161.92.111 port 53476 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:31.609204 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:31.613559 systemd-logind[1620]: New session 14 of user core. Jan 24 00:34:31.618889 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:34:31.734199 containerd[1650]: time="2026-01-24T00:34:31.733798453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:34:32.348995 containerd[1650]: time="2026-01-24T00:34:32.348919590Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:32.350966 containerd[1650]: time="2026-01-24T00:34:32.350884315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:34:32.354023 containerd[1650]: time="2026-01-24T00:34:32.351002474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:32.354023 containerd[1650]: time="2026-01-24T00:34:32.353080719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:34:32.354132 kubelet[2753]: E0124 00:34:32.351157 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:32.354132 kubelet[2753]: E0124 00:34:32.351204 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:32.354132 kubelet[2753]: E0124 00:34:32.351403 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xwzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-f9s4x_calico-apiserver(45f9a298-5fb0-472f-b747-58a979ff2009): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:32.356583 kubelet[2753]: E0124 00:34:32.354980 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:34:32.452067 sshd[5794]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:32.457834 systemd[1]: sshd@14-65.109.167.77:22-20.161.92.111:53476.service: Deactivated successfully. Jan 24 00:34:32.465545 systemd-logind[1620]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:34:32.466213 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:34:32.470000 systemd-logind[1620]: Removed session 14. Jan 24 00:34:32.577696 systemd[1]: Started sshd@15-65.109.167.77:22-20.161.92.111:59158.service - OpenSSH per-connection server daemon (20.161.92.111:59158). 
Jan 24 00:34:32.788062 containerd[1650]: time="2026-01-24T00:34:32.787936120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:32.790519 containerd[1650]: time="2026-01-24T00:34:32.790328283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:34:32.790519 containerd[1650]: time="2026-01-24T00:34:32.790365213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:32.791038 kubelet[2753]: E0124 00:34:32.790639 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:32.791038 kubelet[2753]: E0124 00:34:32.790701 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:32.791038 kubelet[2753]: E0124 00:34:32.790810 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8nk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4f66fd65-vcrjr_calico-apiserver(8571ab88-459c-48f7-a296-37c9ac9b6a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:32.792140 kubelet[2753]: E0124 00:34:32.792109 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:34:33.336044 sshd[5806]: Accepted publickey for core from 20.161.92.111 port 59158 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:33.338244 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:33.351729 systemd-logind[1620]: New session 15 of user core. Jan 24 00:34:33.362097 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:34:34.661363 sshd[5806]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:34.666568 systemd[1]: sshd@15-65.109.167.77:22-20.161.92.111:59158.service: Deactivated successfully. Jan 24 00:34:34.672935 systemd-logind[1620]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:34:34.673369 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:34:34.676025 systemd-logind[1620]: Removed session 15. Jan 24 00:34:34.792473 systemd[1]: Started sshd@16-65.109.167.77:22-20.161.92.111:59164.service - OpenSSH per-connection server daemon (20.161.92.111:59164). Jan 24 00:34:35.569294 sshd[5848]: Accepted publickey for core from 20.161.92.111 port 59164 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:35.573072 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:35.586073 systemd-logind[1620]: New session 16 of user core. Jan 24 00:34:35.592441 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:34:36.270850 sshd[5848]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:36.274345 systemd[1]: sshd@16-65.109.167.77:22-20.161.92.111:59164.service: Deactivated successfully. Jan 24 00:34:36.278590 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 24 00:34:36.279308 systemd-logind[1620]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:34:36.280229 systemd-logind[1620]: Removed session 16. Jan 24 00:34:36.406578 systemd[1]: Started sshd@17-65.109.167.77:22-20.161.92.111:59180.service - OpenSSH per-connection server daemon (20.161.92.111:59180). Jan 24 00:34:37.187820 sshd[5876]: Accepted publickey for core from 20.161.92.111 port 59180 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:37.189163 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:37.198439 systemd-logind[1620]: New session 17 of user core. Jan 24 00:34:37.202552 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:34:37.789368 sshd[5876]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:37.798163 systemd[1]: sshd@17-65.109.167.77:22-20.161.92.111:59180.service: Deactivated successfully. Jan 24 00:34:37.810824 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:34:37.813864 systemd-logind[1620]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:34:37.817583 systemd-logind[1620]: Removed session 17. Jan 24 00:34:40.739746 kubelet[2753]: E0124 00:34:40.739690 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:34:41.733474 containerd[1650]: time="2026-01-24T00:34:41.733376117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:34:42.178258 containerd[1650]: time="2026-01-24T00:34:42.178089836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:42.179520 containerd[1650]: time="2026-01-24T00:34:42.179400593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:34:42.179520 containerd[1650]: time="2026-01-24T00:34:42.179484743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:34:42.180652 kubelet[2753]: E0124 00:34:42.179723 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:34:42.180652 kubelet[2753]: E0124 00:34:42.179766 2753 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:34:42.180652 kubelet[2753]: E0124 00:34:42.179878 2753 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv79g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d6744f47-ksmv4_calico-system(60b48194-9cf1-4af7-bca5-7353b7dd4d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 
00:34:42.181348 kubelet[2753]: E0124 00:34:42.181277 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:34:42.734616 kubelet[2753]: E0124 00:34:42.734567 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:34:42.918748 systemd[1]: Started sshd@18-65.109.167.77:22-20.161.92.111:55804.service - OpenSSH per-connection server daemon (20.161.92.111:55804). Jan 24 00:34:43.684632 sshd[5899]: Accepted publickey for core from 20.161.92.111 port 55804 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:43.684742 sshd[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:43.689749 systemd-logind[1620]: New session 18 of user core. Jan 24 00:34:43.698046 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 00:34:43.734429 kubelet[2753]: E0124 00:34:43.734255 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:34:43.737440 kubelet[2753]: E0124 00:34:43.736770 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:34:44.341528 sshd[5899]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:44.349233 systemd[1]: sshd@18-65.109.167.77:22-20.161.92.111:55804.service: Deactivated successfully. Jan 24 00:34:44.353282 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:34:44.354731 systemd-logind[1620]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:34:44.355925 systemd-logind[1620]: Removed session 18. 
Jan 24 00:34:44.736786 kubelet[2753]: E0124 00:34:44.736592 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:34:46.745587 kubelet[2753]: E0124 00:34:46.745503 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:34:49.472149 systemd[1]: Started sshd@19-65.109.167.77:22-20.161.92.111:55820.service - OpenSSH per-connection server daemon (20.161.92.111:55820). Jan 24 00:34:50.254793 sshd[5913]: Accepted publickey for core from 20.161.92.111 port 55820 ssh2: RSA SHA256:l7qCf3i2zn3B4yCTd9MpdHhqieNbOBVcx9Bhg49nlMA Jan 24 00:34:50.259608 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:50.274062 systemd-logind[1620]: New session 19 of user core. Jan 24 00:34:50.280027 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:34:50.858634 sshd[5913]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:50.863864 systemd[1]: sshd@19-65.109.167.77:22-20.161.92.111:55820.service: Deactivated successfully. Jan 24 00:34:50.873748 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:34:50.876066 systemd-logind[1620]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:34:50.877071 systemd-logind[1620]: Removed session 19. 
Jan 24 00:34:54.743155 kubelet[2753]: E0124 00:34:54.743096 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:34:54.743951 kubelet[2753]: E0124 00:34:54.743584 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97" Jan 24 00:34:54.743951 kubelet[2753]: E0124 00:34:54.743752 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322" Jan 24 00:34:55.738433 kubelet[2753]: E0124 00:34:55.736725 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009" Jan 24 00:34:55.740639 kubelet[2753]: E0124 00:34:55.739152 2753 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133" Jan 24 00:34:57.732922 kubelet[2753]: E0124 00:34:57.732819 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" Jan 24 00:34:57.841428 update_engine[1634]: I20260124 00:34:57.841298 1634 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:34:57.841428 update_engine[1634]: I20260124 00:34:57.841426 1634 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 00:34:57.842195 update_engine[1634]: I20260124 00:34:57.841825 1634 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:34:57.842860 update_engine[1634]: I20260124 00:34:57.842803 1634 omaha_request_params.cc:62] Current group set to lts Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844679 1634 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844722 1634 update_attempter.cc:643] Scheduling an action processor start. Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844756 1634 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844813 1634 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844940 1634 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844956 1634 omaha_request_action.cc:272] Request: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: Jan 24 00:34:57.845172 update_engine[1634]: I20260124 00:34:57.844973 1634 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:34:57.845845 locksmithd[1684]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:34:57.848693 update_engine[1634]: I20260124 00:34:57.848649 1634 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:34:57.849432 update_engine[1634]: I20260124 00:34:57.849350 1634 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:34:57.851817 update_engine[1634]: E20260124 00:34:57.851774 1634 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:34:57.852016 update_engine[1634]: I20260124 00:34:57.851989 1634 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:35:01.732700 kubelet[2753]: E0124 00:35:01.732627 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a" Jan 24 00:35:06.732034 kubelet[2753]: E0124 00:35:06.731943 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41" Jan 24 00:35:06.892658 containerd[1650]: time="2026-01-24T00:35:06.892357715Z" level=info msg="shim disconnected" id=09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02 namespace=k8s.io Jan 24 00:35:06.892658 containerd[1650]: time="2026-01-24T00:35:06.892644994Z" level=warning msg="cleaning up after shim disconnected" id=09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02 namespace=k8s.io Jan 24 00:35:06.892658 containerd[1650]: time="2026-01-24T00:35:06.892663024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:35:06.896669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02-rootfs.mount: Deactivated successfully. Jan 24 00:35:07.104646 kubelet[2753]: E0124 00:35:07.104582 2753 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52412->10.0.0.2:2379: read: connection timed out" Jan 24 00:35:07.228808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017-rootfs.mount: Deactivated successfully. 
Jan 24 00:35:07.239099 containerd[1650]: time="2026-01-24T00:35:07.238970692Z" level=info msg="shim disconnected" id=b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017 namespace=k8s.io
Jan 24 00:35:07.239099 containerd[1650]: time="2026-01-24T00:35:07.239074202Z" level=warning msg="cleaning up after shim disconnected" id=b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017 namespace=k8s.io
Jan 24 00:35:07.239099 containerd[1650]: time="2026-01-24T00:35:07.239087092Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:35:07.390496 kubelet[2753]: I0124 00:35:07.390365 2753 scope.go:117] "RemoveContainer" containerID="09a2a9da0400135caf22eb3d941f2db903dcdb5863a21d1c97f99e8ca8f27e02"
Jan 24 00:35:07.392727 kubelet[2753]: I0124 00:35:07.392688 2753 scope.go:117] "RemoveContainer" containerID="b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017"
Jan 24 00:35:07.397414 containerd[1650]: time="2026-01-24T00:35:07.397308331Z" level=info msg="CreateContainer within sandbox \"6f59b1c0bacd39e3d2f28183ed42eb756ac26cfe093305f8d9708e6bf466381d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 24 00:35:07.397609 containerd[1650]: time="2026-01-24T00:35:07.397312841Z" level=info msg="CreateContainer within sandbox \"70b3f891d4bc45b14de43a3bbc58f3486a9472348f1b8b166adc86c971682f92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 24 00:35:07.414866 containerd[1650]: time="2026-01-24T00:35:07.414806523Z" level=info msg="CreateContainer within sandbox \"70b3f891d4bc45b14de43a3bbc58f3486a9472348f1b8b166adc86c971682f92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8762e4c88d0c15ea5d7a2d098fc6ebf07edd0394f3e89d2c9000aeb25603b371\""
Jan 24 00:35:07.419526 containerd[1650]: time="2026-01-24T00:35:07.416573670Z" level=info msg="StartContainer for \"8762e4c88d0c15ea5d7a2d098fc6ebf07edd0394f3e89d2c9000aeb25603b371\""
Jan 24 00:35:07.419526 containerd[1650]: time="2026-01-24T00:35:07.417674678Z" level=info msg="CreateContainer within sandbox \"6f59b1c0bacd39e3d2f28183ed42eb756ac26cfe093305f8d9708e6bf466381d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7\""
Jan 24 00:35:07.418216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount640154272.mount: Deactivated successfully.
Jan 24 00:35:07.420028 containerd[1650]: time="2026-01-24T00:35:07.419630325Z" level=info msg="StartContainer for \"f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7\""
Jan 24 00:35:07.483436 kubelet[2753]: I0124 00:35:07.483409 2753 status_manager.go:890] "Failed to get status for pod" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e" pod="calico-system/goldmane-666569f655-r5n27" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52328->10.0.0.2:2379: read: connection timed out"
Jan 24 00:35:07.484781 kubelet[2753]: E0124 00:35:07.479672 2753 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52236->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-r5n27.188d837a54a62556 calico-system 1633 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-r5n27,UID:e954bcbc-6a7d-4fa9-9256-747a5b39530e,APIVersion:v1,ResourceVersion:833,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a9e48d2ea0,},FirstTimestamp:2026-01-24 00:33:10 +0000 UTC,LastTimestamp:2026-01-24 00:34:57.732739371 +0000 UTC m=+153.101318732,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a9e48d2ea0,}"
Jan 24 00:35:07.495415 containerd[1650]: time="2026-01-24T00:35:07.495003405Z" level=info msg="StartContainer for \"f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7\" returns successfully"
Jan 24 00:35:07.509698 containerd[1650]: time="2026-01-24T00:35:07.509507101Z" level=info msg="StartContainer for \"8762e4c88d0c15ea5d7a2d098fc6ebf07edd0394f3e89d2c9000aeb25603b371\" returns successfully"
Jan 24 00:35:07.733066 kubelet[2753]: E0124 00:35:07.732812 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009"
Jan 24 00:35:07.846488 update_engine[1634]: I20260124 00:35:07.845619 1634 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:35:07.846488 update_engine[1634]: I20260124 00:35:07.846015 1634 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:35:07.846488 update_engine[1634]: I20260124 00:35:07.846375 1634 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
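The stretch above is the kubelet's normal crash-recovery path rather than a pull failure: containerd reports a shim disconnected, the dead shim is cleaned up, the kubelet issues RemoveContainer for the old container ID, and a replacement is created inside the same pod sandbox with the Attempt counter bumped to 1 before StartContainer returns successfully. When auditing a log like this it is worth confirming that every disconnected shim was eventually reaped; a minimal sketch that pairs the two kinds of entries by container ID (the file name is illustrative, any saved copy of this journal works):

    # Minimal sketch: pair containerd "shim disconnected" IDs with kubelet
    # "RemoveContainer" IDs so orphaned shims stand out. The journal path
    # is illustrative.
    import re

    disconnected, removed = set(), set()
    with open("node.log", encoding="utf-8") as log:
        for line in log:
            m = re.search(r'msg="shim disconnected" id=([0-9a-f]{64})', line)
            if m:
                disconnected.add(m.group(1))
            m = re.search(r'"RemoveContainer" containerID="([0-9a-f]{64})"', line)
            if m:
                removed.add(m.group(1))

    for cid in sorted(disconnected - removed):
        print("shim died but container never reaped:", cid)

In this extract the sets match: both 09a2a9da... and b95b5c03... show up first as disconnected shims and then as RemoveContainer targets.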
Jan 24 00:35:07.847785 update_engine[1634]: E20260124 00:35:07.847723 1634 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:35:07.847978 update_engine[1634]: I20260124 00:35:07.847949 1634 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 24 00:35:08.732551 kubelet[2753]: E0124 00:35:08.732465 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97"
Jan 24 00:35:08.732819 kubelet[2753]: E0124 00:35:08.732680 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322"
Jan 24 00:35:10.734012 kubelet[2753]: E0124 00:35:10.733929 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e"
Jan 24 00:35:10.735075 kubelet[2753]: E0124 00:35:10.734039 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133"
Jan 24 00:35:13.329212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5-rootfs.mount: Deactivated successfully.
Jan 24 00:35:13.339699 containerd[1650]: time="2026-01-24T00:35:13.339618676Z" level=info msg="shim disconnected" id=d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5 namespace=k8s.io
Jan 24 00:35:13.339699 containerd[1650]: time="2026-01-24T00:35:13.339695176Z" level=warning msg="cleaning up after shim disconnected" id=d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5 namespace=k8s.io
Jan 24 00:35:13.339699 containerd[1650]: time="2026-01-24T00:35:13.339711296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:35:13.413709 kubelet[2753]: I0124 00:35:13.413647 2753 scope.go:117] "RemoveContainer" containerID="d1d3baa983d1c47445a2ced29a55391d0797535f4c0443d4aeb8ee31037772a5"
Jan 24 00:35:13.415753 containerd[1650]: time="2026-01-24T00:35:13.415677842Z" level=info msg="CreateContainer within sandbox \"b64b17aca8a0dca3656be434e2091567c2a3369cf35f36ba179978352284d26d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 24 00:35:13.441881 containerd[1650]: time="2026-01-24T00:35:13.441833433Z" level=info msg="CreateContainer within sandbox \"b64b17aca8a0dca3656be434e2091567c2a3369cf35f36ba179978352284d26d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b9c0e68bfb2b76fe371b14b74ff9a04b9d6d187843b6454ddb439064830e7bdc\""
Jan 24 00:35:13.444453 containerd[1650]: time="2026-01-24T00:35:13.443996650Z" level=info msg="StartContainer for \"b9c0e68bfb2b76fe371b14b74ff9a04b9d6d187843b6454ddb439064830e7bdc\""
Jan 24 00:35:13.579962 containerd[1650]: time="2026-01-24T00:35:13.579846026Z" level=info msg="StartContainer for \"b9c0e68bfb2b76fe371b14b74ff9a04b9d6d187843b6454ddb439064830e7bdc\" returns successfully"
Jan 24 00:35:16.732605 kubelet[2753]: E0124 00:35:16.732369 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-vcrjr" podUID="8571ab88-459c-48f7-a296-37c9ac9b6a8a"
Jan 24 00:35:17.104965 kubelet[2753]: E0124 00:35:17.104865 2753 controller.go:195] "Failed to update lease" err="Put \"https://65.109.167.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a9e48d2ea0?timeout=10s\": context deadline exceeded"
Jan 24 00:35:17.845588 update_engine[1634]: I20260124 00:35:17.845453 1634 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:35:17.846369 update_engine[1634]: I20260124 00:35:17.845873 1634 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:35:17.846369 update_engine[1634]: I20260124 00:35:17.846186 1634 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
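Threaded through this stretch is the update_engine retry cadence: the first transfer fails at 00:34:57 (retry 1), retries 2 and 3 follow at 00:35:07 and 00:35:17, and the attempt is finally abandoned at 00:35:27 below. That is one attempt plus three retries at a fixed spacing of roughly 10 s. A minimal sketch of that shape, with the constants inferred from these timestamps rather than read from the update_engine sources:

    # Minimal sketch of the bounded retry loop visible above: one attempt
    # plus three retries, ~10 s apart, then the whole check is abandoned.
    # Constants are inferred from this log's timestamps, not from the
    # update_engine sources.
    import time

    def fetch_once(url: str) -> bool:
        """Placeholder for the real HTTP transfer; 'disabled' never resolves."""
        return False

    def checked_fetch(url: str, retries: int = 3, spacing_s: float = 10.0) -> bool:
        for attempt in range(retries + 1):
            if fetch_once(url):
                return True
            if attempt < retries:
                time.sleep(spacing_s)  # matches the 00:34:57/07/17/27 spacing
        return False  # caller then reports "Omaha request network transfer failed"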
Jan 24 00:35:17.846933 update_engine[1634]: E20260124 00:35:17.846880 1634 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:35:17.846999 update_engine[1634]: I20260124 00:35:17.846967 1634 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 24 00:35:18.733534 kubelet[2753]: E0124 00:35:18.733482 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d6744f47-ksmv4" podUID="60b48194-9cf1-4af7-bca5-7353b7dd4d41"
Jan 24 00:35:18.756558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7-rootfs.mount: Deactivated successfully.
Jan 24 00:35:18.759761 containerd[1650]: time="2026-01-24T00:35:18.757035216Z" level=info msg="shim disconnected" id=f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7 namespace=k8s.io
Jan 24 00:35:18.759761 containerd[1650]: time="2026-01-24T00:35:18.757099356Z" level=warning msg="cleaning up after shim disconnected" id=f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7 namespace=k8s.io
Jan 24 00:35:18.759761 containerd[1650]: time="2026-01-24T00:35:18.757114566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:35:19.430817 kubelet[2753]: I0124 00:35:19.430746 2753 scope.go:117] "RemoveContainer" containerID="b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017"
Jan 24 00:35:19.431274 kubelet[2753]: I0124 00:35:19.431230 2753 scope.go:117] "RemoveContainer" containerID="f94ca3deeaaed1ad63c5cc6b1bec915e9c5ea1e58edf4622ff214b4f8bf079d7"
Jan 24 00:35:19.431989 kubelet[2753]: E0124 00:35:19.431516 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-cg6z7_tigera-operator(656a2ee3-91ec-4438-99d2-66fb734308a5)\"" pod="tigera-operator/tigera-operator-7dcd859c48-cg6z7" podUID="656a2ee3-91ec-4438-99d2-66fb734308a5"
Jan 24 00:35:19.434034 containerd[1650]: time="2026-01-24T00:35:19.433965240Z" level=info msg="RemoveContainer for \"b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017\""
Jan 24 00:35:19.443079 containerd[1650]: time="2026-01-24T00:35:19.443004876Z" level=info msg="RemoveContainer for \"b95b5c0380ad1f6bd35617c2ba79814116ef2bcd85f2cf575707f75de31c2017\" returns successfully"
Jan 24 00:35:22.735858 kubelet[2753]: E0124 00:35:22.735526 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4f66fd65-f9s4x" podUID="45f9a298-5fb0-472f-b747-58a979ff2009"
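The tigera-operator entry above is a different failure mode from the pull errors: its image is present and the container does start, but it keeps exiting, so the kubelet throttles restarts with CrashLoopBackOff, beginning at the 10 s shown here. A sketch of the resulting schedule, assuming the kubelet's documented defaults of doubling per restart up to a 5 m cap (values assumed, not read from this log):

    # Sketch of the CrashLoopBackOff schedule implied by "back-off 10s":
    # exponential doubling from a 10 s base to a 5 m cap (the kubelet's
    # documented defaults; assumed here, not taken from this log).
    BASE_S, CAP_S = 10, 300

    def crashloop_delays(restarts: int):
        delay = BASE_S
        for _ in range(restarts):
            yield delay
            delay = min(delay * 2, CAP_S)

    print(list(crashloop_delays(7)))  # [10, 20, 40, 80, 160, 300, 300]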
Jan 24 00:35:22.735858 kubelet[2753]: E0124 00:35:22.735647 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfcb7c46c-w555v" podUID="abb81e57-cdeb-458f-9a89-6ad70b4a9133"
Jan 24 00:35:22.736985 kubelet[2753]: E0124 00:35:22.736450 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58745d67dd-ct89f" podUID="de21b4a8-c633-4342-967f-dd18be2c5322"
Jan 24 00:35:23.733556 kubelet[2753]: E0124 00:35:23.733471 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv7gx" podUID="08d51dd3-a54b-4b8c-9510-41c1d4106f97"
Jan 24 00:35:25.733743 kubelet[2753]: E0124 00:35:25.733283 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5n27" podUID="e954bcbc-6a7d-4fa9-9256-747a5b39530e"
Jan 24 00:35:27.106210 kubelet[2753]: E0124 00:35:27.106063 2753 controller.go:195] "Failed to update lease" err="Put \"https://65.109.167.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a9e48d2ea0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 00:35:27.846209 update_engine[1634]: I20260124 00:35:27.846073 1634 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:35:27.847037 update_engine[1634]: I20260124 00:35:27.846568 1634 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:35:27.847037 update_engine[1634]: I20260124 00:35:27.846901 1634 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 24 00:35:27.847644 update_engine[1634]: E20260124 00:35:27.847589 1634 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:35:27.847741 update_engine[1634]: I20260124 00:35:27.847672 1634 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 24 00:35:27.847741 update_engine[1634]: I20260124 00:35:27.847690 1634 omaha_request_action.cc:617] Omaha request response:
Jan 24 00:35:27.847859 update_engine[1634]: E20260124 00:35:27.847815 1634 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 24 00:35:27.847902 update_engine[1634]: I20260124 00:35:27.847857 1634 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 24 00:35:27.847902 update_engine[1634]: I20260124 00:35:27.847873 1634 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 24 00:35:27.847902 update_engine[1634]: I20260124 00:35:27.847888 1634 update_attempter.cc:306] Processing Done.
Jan 24 00:35:27.848084 update_engine[1634]: E20260124 00:35:27.847914 1634 update_attempter.cc:619] Update failed.
Jan 24 00:35:27.848084 update_engine[1634]: I20260124 00:35:27.847929 1634 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 24 00:35:27.848084 update_engine[1634]: I20260124 00:35:27.847944 1634 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 24 00:35:27.848084 update_engine[1634]: I20260124 00:35:27.847960 1634 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 24 00:35:27.848084 update_engine[1634]: I20260124 00:35:27.848063 1634 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 24 00:35:27.848320 update_engine[1634]: I20260124 00:35:27.848097 1634 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 24 00:35:27.848320 update_engine[1634]: I20260124 00:35:27.848114 1634 omaha_request_action.cc:272] Request:
Jan 24 00:35:27.848320 update_engine[1634]: (request XML body omitted)
Jan 24 00:35:27.848320 update_engine[1634]: I20260124 00:35:27.848129 1634 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:35:27.848679 update_engine[1634]: I20260124 00:35:27.848443 1634 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:35:27.848789 update_engine[1634]: I20260124 00:35:27.848696 1634 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 24 00:35:27.849143 locksmithd[1684]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 24 00:35:27.849962 update_engine[1634]: E20260124 00:35:27.849893 1634 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:35:27.850019 update_engine[1634]: I20260124 00:35:27.849983 1634 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 24 00:35:27.850019 update_engine[1634]: I20260124 00:35:27.850003 1634 omaha_request_action.cc:617] Omaha request response:
Jan 24 00:35:27.850097 update_engine[1634]: I20260124 00:35:27.850021 1634 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 24 00:35:27.850097 update_engine[1634]: I20260124 00:35:27.850042 1634 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 24 00:35:27.850097 update_engine[1634]: I20260124 00:35:27.850061 1634 update_attempter.cc:306] Processing Done.
Jan 24 00:35:27.850097 update_engine[1634]: I20260124 00:35:27.850083 1634 update_attempter.cc:310] Error event sent.
Jan 24 00:35:27.850249 update_engine[1634]: I20260124 00:35:27.850135 1634 update_check_scheduler.cc:74] Next update check in 49m41s
Jan 24 00:35:27.850645 locksmithd[1684]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
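The closing block explains the recurring "Could not resolve host: disabled": the configured Omaha server is the literal string disabled (see "Posting an Omaha request to disabled" above), which on Flatcar is consistent with updates having been switched off, for example via SERVER=disabled in /etc/flatcar/update.conf. The check fails, the error event cannot be delivered for the same reason, error code 2000 is mapped to kActionCodeOmahaErrorInHTTPResponse (37), and the attempter goes idle after scheduling the next poll. The 49m41s figure is consistent with a fuzzed periodic check; a sketch of that scheduling shape, where the base interval and fuzz are illustrative assumptions rather than update_engine's actual constants:

    # Sketch of fuzzed periodic scheduling consistent with "Next update
    # check in 49m41s". The 45 min base and +/- 5 min fuzz are illustrative
    # assumptions, not values read from update_engine.
    import random

    BASE_S = 45 * 60
    FUZZ_S = 5 * 60

    def next_check_delay() -> int:
        return BASE_S + random.randint(-FUZZ_S, FUZZ_S)

    d = next_check_delay()
    print(f"Next update check in {d // 60}m{d % 60}s")  # e.g. 49m41s

With the server disabled this loop repeats indefinitely and is expected noise, unlike the calico pull failures above, which point at a genuinely missing registry artifact.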