Jan 16 23:55:47.888217 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 16 23:55:47.888255 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:55:47.888272 kernel: KASLR enabled
Jan 16 23:55:47.888375 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 16 23:55:47.888387 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 16 23:55:47.888398 kernel: random: crng init done
Jan 16 23:55:47.888411 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:55:47.888422 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 16 23:55:47.888433 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 16 23:55:47.888449 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888504 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888516 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888527 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888538 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888552 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888568 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888580 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888592 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:47.888604 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 23:55:47.888615 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 16 23:55:47.888627 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:55:47.888709 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:55:47.888728 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 16 23:55:47.888740 kernel: Zone ranges:
Jan 16 23:55:47.888752 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:55:47.888769 kernel: DMA32 empty
Jan 16 23:55:47.888781 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 16 23:55:47.888793 kernel: Movable zone start for each node
Jan 16 23:55:47.888804 kernel: Early memory node ranges
Jan 16 23:55:47.888816 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 16 23:55:47.888828 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 16 23:55:47.888840 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 16 23:55:47.888851 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 16 23:55:47.888863 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 16 23:55:47.888875 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 16 23:55:47.888887 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 16 23:55:47.888899 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:55:47.888913 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 16 23:55:47.888935 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:55:47.888953 kernel: psci: PSCIv1.1 detected in firmware.
Jan 16 23:55:47.888973 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:55:47.888986 kernel: psci: Trusted OS migration not required
Jan 16 23:55:47.888998 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:55:47.889013 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 16 23:55:47.889026 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:55:47.889039 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:55:47.889052 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:55:47.889064 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:55:47.889077 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:55:47.889089 kernel: CPU features: detected: Hardware dirty bit management
Jan 16 23:55:47.889101 kernel: CPU features: detected: Spectre-v4
Jan 16 23:55:47.889114 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:55:47.889126 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 16 23:55:47.889141 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 16 23:55:47.889154 kernel: CPU features: detected: ARM erratum 1418040
Jan 16 23:55:47.889166 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 16 23:55:47.889179 kernel: alternatives: applying boot alternatives
Jan 16 23:55:47.889194 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:55:47.889207 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:55:47.889220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:55:47.889233 kernel: Fallback order for Node 0: 0
Jan 16 23:55:47.889245 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 16 23:55:47.889258 kernel: Policy zone: Normal
Jan 16 23:55:47.889271 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:55:47.889299 kernel: software IO TLB: area num 2.
Jan 16 23:55:47.889312 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 16 23:55:47.889326 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 16 23:55:47.889339 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:55:47.889351 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:55:47.889365 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:55:47.889378 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:55:47.889391 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:55:47.889404 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:55:47.889416 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 23:55:47.889429 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:55:47.889441 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:55:47.889472 kernel: GICv3: 256 SPIs implemented
Jan 16 23:55:47.889489 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:55:47.889502 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:55:47.889515 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 16 23:55:47.889527 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 16 23:55:47.889540 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 16 23:55:47.889552 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:55:47.889565 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:55:47.889578 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 16 23:55:47.889591 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 16 23:55:47.889604 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:55:47.889620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:55:47.889633 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 16 23:55:47.889646 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 16 23:55:47.889659 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 16 23:55:47.889671 kernel: Console: colour dummy device 80x25
Jan 16 23:55:47.889684 kernel: ACPI: Core revision 20230628
Jan 16 23:55:47.889698 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 16 23:55:47.889711 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:55:47.889724 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:55:47.889737 kernel: landlock: Up and running.
Jan 16 23:55:47.889751 kernel: SELinux: Initializing.
Jan 16 23:55:47.889764 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:55:47.889778 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:55:47.889791 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:55:47.889804 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:55:47.889817 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:55:47.889830 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 23:55:47.889843 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 16 23:55:47.889855 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 16 23:55:47.889871 kernel: Remapping and enabling EFI services.
Jan 16 23:55:47.889884 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:55:47.889897 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:55:47.889911 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 16 23:55:47.889924 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 16 23:55:47.889935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:55:47.889942 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 16 23:55:47.889949 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:55:47.889956 kernel: SMP: Total of 2 processors activated.
Jan 16 23:55:47.889965 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:55:47.889972 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 16 23:55:47.889979 kernel: CPU features: detected: Common not Private translations
Jan 16 23:55:47.889992 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:55:47.890002 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 16 23:55:47.890009 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 16 23:55:47.890017 kernel: CPU features: detected: LSE atomic instructions
Jan 16 23:55:47.890025 kernel: CPU features: detected: Privileged Access Never
Jan 16 23:55:47.890032 kernel: CPU features: detected: RAS Extension Support
Jan 16 23:55:47.890042 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 16 23:55:47.890050 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:55:47.890058 kernel: alternatives: applying system-wide alternatives
Jan 16 23:55:47.890065 kernel: devtmpfs: initialized
Jan 16 23:55:47.890073 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:55:47.890080 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:55:47.890131 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:55:47.890142 kernel: SMBIOS 3.0.0 present.
Jan 16 23:55:47.890153 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 16 23:55:47.890160 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:55:47.890169 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:55:47.890177 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:55:47.890185 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:55:47.890192 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:55:47.890200 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Jan 16 23:55:47.890208 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:55:47.890215 kernel: cpuidle: using governor menu
Jan 16 23:55:47.890224 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:55:47.890232 kernel: ASID allocator initialised with 32768 entries
Jan 16 23:55:47.890239 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:55:47.890247 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:55:47.890255 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 16 23:55:47.890263 kernel: Modules: 0 pages in range for non-PLT usage
Jan 16 23:55:47.890270 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:55:47.890284 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:55:47.890292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:55:47.890302 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:55:47.890310 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:55:47.890317 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:55:47.890324 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:55:47.890332 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:55:47.890340 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:55:47.890347 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:55:47.890355 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:55:47.890364 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:55:47.890373 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:55:47.890381 kernel: ACPI: Interpreter enabled
Jan 16 23:55:47.890389 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:55:47.890396 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:55:47.890404 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 16 23:55:47.890412 kernel: printk: console [ttyAMA0] enabled
Jan 16 23:55:47.890420 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 23:55:47.890615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:55:47.890701 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:55:47.890769 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:55:47.890836 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 16 23:55:47.890903 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 16 23:55:47.890913 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 16 23:55:47.890921 kernel: PCI host bridge to bus 0000:00
Jan 16 23:55:47.890996 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 16 23:55:47.891061 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 23:55:47.891122 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 16 23:55:47.891183 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 23:55:47.891270 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 16 23:55:47.891725 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 16 23:55:47.891803 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 16 23:55:47.891872 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:55:47.891956 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.892023 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 16 23:55:47.892097 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.892165 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 16 23:55:47.892239 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.892336 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 16 23:55:47.892419 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.894596 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 16 23:55:47.894706 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.894777 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 16 23:55:47.894853 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.894921 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 16 23:55:47.895002 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.895069 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 16 23:55:47.895148 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.895302 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 16 23:55:47.895389 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:47.896613 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 16 23:55:47.896767 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 16 23:55:47.896839 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 16 23:55:47.896918 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:55:47.896987 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 16 23:55:47.897058 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:55:47.897126 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:55:47.897200 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 16 23:55:47.897272 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 16 23:55:47.897364 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 16 23:55:47.897433 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 16 23:55:47.898600 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 16 23:55:47.898710 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 16 23:55:47.898781 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 16 23:55:47.898906 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 16 23:55:47.898981 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 16 23:55:47.899050 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 16 23:55:47.899133 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 16 23:55:47.899205 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 16 23:55:47.899283 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:55:47.899371 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:55:47.899441 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 16 23:55:47.899530 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 16 23:55:47.899603 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:55:47.899677 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 16 23:55:47.899755 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:55:47.899821 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:55:47.899896 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 16 23:55:47.899967 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 16 23:55:47.900032 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 16 23:55:47.900102 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 16 23:55:47.900170 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:55:47.900235 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:55:47.900320 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 16 23:55:47.900391 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 16 23:55:47.900830 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 16 23:55:47.900950 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 16 23:55:47.903526 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:55:47.903663 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:55:47.903739 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 16 23:55:47.903806 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:55:47.903874 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:55:47.903951 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 16 23:55:47.904020 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:55:47.904093 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:55:47.904164 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 16 23:55:47.904234 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:55:47.904349 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:55:47.904425 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 16 23:55:47.904510 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:55:47.904576 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:55:47.904653 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 16 23:55:47.904723 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:47.904801 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 16 23:55:47.904869 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:47.904940 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 16 23:55:47.905010 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:47.905078 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 16 23:55:47.905143 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:47.905209 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 16 23:55:47.905284 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:47.905358 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:47.905425 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:47.905518 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:47.905585 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:47.905651 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:47.905932 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:47.906004 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 16 23:55:47.906072 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:47.906143 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 16 23:55:47.906216 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 16 23:55:47.906335 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 16 23:55:47.906410 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 16 23:55:47.907112 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 16 23:55:47.907202 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 16 23:55:47.907271 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 16 23:55:47.907372 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 16 23:55:47.907442 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 16 23:55:47.907539 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 16 23:55:47.907610 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 16 23:55:47.907677 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 16 23:55:47.907744 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 16 23:55:47.907809 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 16 23:55:47.907877 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 16 23:55:47.907942 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 16 23:55:47.908010 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 16 23:55:47.908079 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 16 23:55:47.908147 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 16 23:55:47.908214 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 16 23:55:47.908298 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 16 23:55:47.908429 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 16 23:55:47.908569 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:55:47.908646 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 16 23:55:47.908715 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 23:55:47.908789 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 16 23:55:47.908855 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 23:55:47.908921 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:47.908997 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 16 23:55:47.909068 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 23:55:47.909134 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 16 23:55:47.909243 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 16 23:55:47.909370 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:47.909451 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:55:47.909559 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 16 23:55:47.909632 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 23:55:47.909699 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 16 23:55:47.909772 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 16 23:55:47.909839 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:47.909923 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:55:47.909993 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 23:55:47.910060 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 16 23:55:47.910126 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 16 23:55:47.910194 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:47.910271 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 16 23:55:47.910360 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 16 23:55:47.910428 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 23:55:47.912635 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 16 23:55:47.912736 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 23:55:47.912806 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:47.912884 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 16 23:55:47.912953 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 16 23:55:47.913021 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 23:55:47.913095 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 16 23:55:47.913161 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:47.913228 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:47.913322 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 16 23:55:47.913396 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 16 23:55:47.913502 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 16 23:55:47.913586 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 23:55:47.913654 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 16 23:55:47.913728 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:47.913793 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:47.913863 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 23:55:47.913929 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 16 23:55:47.913995 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:47.914063 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:47.914134 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 23:55:47.914200 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 16 23:55:47.914269 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 16 23:55:47.914384 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:47.916528 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 16 23:55:47.916644 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 16 23:55:47.916721 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 16 23:55:47.916795 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 16 23:55:47.916857 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 16 23:55:47.916924 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:47.916994 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 16 23:55:47.917055 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 16 23:55:47.917116 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:47.917185 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 16 23:55:47.917245 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 16 23:55:47.917323 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:47.917393 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 16 23:55:47.917454 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 16 23:55:47.917643 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:47.917712 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 16 23:55:47.917779 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 16 23:55:47.917839 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:47.917910 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 16 23:55:47.917971 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:47.918033 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:47.918102 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 16 23:55:47.918166 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:47.918227 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:47.918337 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 16 23:55:47.918403 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:47.918559 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:47.918635 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 16 23:55:47.918696 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 16 23:55:47.918762 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:47.918772 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:55:47.918781 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:55:47.918788 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:55:47.918796 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:55:47.918804 kernel: iommu: Default domain type: Translated
Jan 16 23:55:47.918813 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:55:47.918820 kernel: efivars: Registered efivars operations
Jan 16 23:55:47.918830 kernel: vgaarb: loaded
Jan 16 23:55:47.918838 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:55:47.918846 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:55:47.918854 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:55:47.918862 kernel: pnp: PnP ACPI init
Jan 16 23:55:47.918935 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 16 23:55:47.918947 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:55:47.918955 kernel: NET: Registered PF_INET protocol family
Jan 16 23:55:47.918962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:55:47.918972 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:55:47.918981 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:55:47.918988 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:55:47.918996 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 23:55:47.919004 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 23:55:47.919012 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:55:47.919020 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:55:47.919028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 23:55:47.919102 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:47.919116 kernel: PCI: CLS 0 bytes, default 64
Jan 16 23:55:47.919123 kernel: kvm [1]: HYP mode not available
Jan 16 23:55:47.919131 kernel: Initialise system trusted keyrings
Jan 16 23:55:47.919140 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 23:55:47.919147 kernel: Key type asymmetric registered
Jan 16 23:55:47.919155 kernel: Asymmetric key parser 'x509' registered
Jan 16 23:55:47.919163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 23:55:47.919170 kernel: io scheduler mq-deadline registered
Jan 16 23:55:47.919180 kernel: io scheduler kyber registered
Jan 16 23:55:47.919188 kernel: io scheduler bfq registered
Jan 16 23:55:47.919196 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 16 23:55:47.919268 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 16 23:55:47.919392 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 16 23:55:47.919474 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.919547 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 16 23:55:47.919621 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 16 23:55:47.919686 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.919757 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 16 23:55:47.919824 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 16 23:55:47.919889 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.919958 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 16 23:55:47.920028 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 16 23:55:47.920093 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.920162 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 16 23:55:47.920228 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 16 23:55:47.920309 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.920382 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 16 23:55:47.920453 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 16 23:55:47.920535 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.920604 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 16 23:55:47.920671 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 16 23:55:47.920736 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.920806 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 16 23:55:47.920876 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 16 23:55:47.920943 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.920954 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 16 23:55:47.921021 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 16 23:55:47.921091 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 16 23:55:47.921157 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:47.921168 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 16 23:55:47.921178 kernel: ACPI: button: Power Button [PWRB]
Jan 16 23:55:47.921186 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 16 23:55:47.921309 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:47.921400 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:47.921413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 23:55:47.921421 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 16 23:55:47.921556 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 16 23:55:47.921570 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 16 23:55:47.921578 kernel: thunder_xcv, ver 1.0
Jan 16 23:55:47.921591 kernel: thunder_bgx, ver 1.0
Jan 16 23:55:47.921598 kernel: nicpf, ver 1.0
Jan 16 23:55:47.921606 kernel: nicvf, ver 1.0
Jan 16 23:55:47.921687 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 16 23:55:47.921750 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:55:47 UTC (1768607747)
Jan 16 23:55:47.921760 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 23:55:47.921768 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 16 23:55:47.921777 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 16 23:55:47.921787 kernel: watchdog: Hard watchdog permanently disabled
Jan 16 23:55:47.921795 kernel: NET: Registered PF_INET6 protocol family
Jan 16 23:55:47.921803 kernel: Segment Routing with IPv6
Jan 16 23:55:47.921810 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 23:55:47.921819 kernel: NET: Registered PF_PACKET protocol family
Jan 16 23:55:47.921827 kernel: Key type dns_resolver registered
Jan 16 23:55:47.921835 kernel: registered taskstats version 1
Jan 16 23:55:47.921844 kernel: Loading compiled-in X.509 certificates
Jan 16 23:55:47.921851 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 16 23:55:47.921861 kernel: Key type .fscrypt registered
Jan 16 23:55:47.921868 kernel: Key type fscrypt-provisioning registered
Jan 16 23:55:47.921876 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 23:55:47.921884 kernel: ima: Allocated hash algorithm: sha1
Jan 16 23:55:47.921892 kernel: ima: No architecture policies found
Jan 16 23:55:47.921899 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 16 23:55:47.921907 kernel: clk: Disabling unused clocks
Jan 16 23:55:47.921915 kernel: Freeing unused kernel memory: 39424K
Jan 16 23:55:47.921923 kernel: Run /init as init process
Jan 16 23:55:47.921933 kernel: with arguments:
Jan 16 23:55:47.921941 kernel: /init
Jan 16 23:55:47.921949 kernel: with environment:
Jan 16 23:55:47.921957 kernel: HOME=/
Jan 16 23:55:47.921964 kernel: TERM=linux
Jan 16 23:55:47.921974 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:55:47.921984 systemd[1]: Detected virtualization kvm.
Jan 16 23:55:47.921993 systemd[1]: Detected architecture arm64.
Jan 16 23:55:47.922002 systemd[1]: Running in initrd.
Jan 16 23:55:47.922010 systemd[1]: No hostname configured, using default hostname.
Jan 16 23:55:47.922018 systemd[1]: Hostname set to <localhost>.
Jan 16 23:55:47.922027 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:55:47.922037 systemd[1]: Queued start job for default target initrd.target.
Jan 16 23:55:47.922046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:55:47.922054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:55:47.922063 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 23:55:47.922073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:55:47.922082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 23:55:47.922091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 23:55:47.922100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 23:55:47.922109 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 23:55:47.922117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:55:47.922125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:55:47.922135 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:55:47.922144 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:55:47.922152 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:55:47.922160 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:55:47.922168 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:55:47.922177 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:55:47.922185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:55:47.922193 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:55:47.922203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:55:47.922212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:55:47.922220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:55:47.922229 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:55:47.922237 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 23:55:47.922246 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:55:47.922254 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 23:55:47.922263 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 23:55:47.922271 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:55:47.922294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:55:47.922303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:47.922311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 23:55:47.922320 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:55:47.922350 systemd-journald[238]: Collecting audit messages is disabled.
Jan 16 23:55:47.922372 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 23:55:47.922382 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:55:47.922391 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 23:55:47.922400 kernel: Bridge firewalling registered
Jan 16 23:55:47.922409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:55:47.922417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:47.922426 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:47.922435 systemd-journald[238]: Journal started
Jan 16 23:55:47.922466 systemd-journald[238]: Runtime Journal (/run/log/journal/7e9cc60c086a434a950f11fcd2b714b6) is 8.0M, max 76.6M, 68.6M free.
Jan 16 23:55:47.923614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:55:47.881498 systemd-modules-load[239]: Inserted module 'overlay'
Jan 16 23:55:47.904031 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 16 23:55:47.929359 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:55:47.927887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:55:47.945819 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:55:47.948111 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:55:47.949833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:47.951747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:55:47.958679 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 23:55:47.961266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:55:47.970492 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:55:47.972089 dracut-cmdline[270]: dracut-dracut-053
Jan 16 23:55:47.973568 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:55:47.980743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:55:48.018231 systemd-resolved[290]: Positive Trust Anchors:
Jan 16 23:55:48.019223 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:55:48.019260 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:55:48.029219 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jan 16 23:55:48.030393 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:55:48.031212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:55:48.059537 kernel: SCSI subsystem initialized
Jan 16 23:55:48.063530 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 23:55:48.071513 kernel: iscsi: registered transport (tcp)
Jan 16 23:55:48.084736 kernel: iscsi: registered transport (qla4xxx)
Jan 16 23:55:48.084838 kernel: QLogic iSCSI HBA Driver
Jan 16 23:55:48.134722 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:55:48.139652 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 23:55:48.160091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 23:55:48.160222 kernel: device-mapper: uevent: version 1.0.3
Jan 16 23:55:48.160249 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 23:55:48.209527 kernel: raid6: neonx8 gen() 15670 MB/s
Jan 16 23:55:48.226501 kernel: raid6: neonx4 gen() 15563 MB/s
Jan 16 23:55:48.243497 kernel: raid6: neonx2 gen() 13185 MB/s
Jan 16 23:55:48.260548 kernel: raid6: neonx1 gen() 10416 MB/s
Jan 16 23:55:48.277508 kernel: raid6: int64x8 gen() 6921 MB/s
Jan 16 23:55:48.294536 kernel: raid6: int64x4 gen() 7316 MB/s
Jan 16 23:55:48.311504 kernel: raid6: int64x2 gen() 6098 MB/s
Jan 16 23:55:48.328515 kernel: raid6: int64x1 gen() 5034 MB/s
Jan 16 23:55:48.328591 kernel: raid6: using algorithm neonx8 gen() 15670 MB/s
Jan 16 23:55:48.345517 kernel: raid6: .... xor() 11954 MB/s, rmw enabled
Jan 16 23:55:48.345600 kernel: raid6: using neon recovery algorithm
Jan 16 23:55:48.350746 kernel: xor: measuring software checksum speed
Jan 16 23:55:48.350812 kernel: 8regs : 19740 MB/sec
Jan 16 23:55:48.350836 kernel: 32regs : 17275 MB/sec
Jan 16 23:55:48.350858 kernel: arm64_neon : 27123 MB/sec
Jan 16 23:55:48.351498 kernel: xor: using function: arm64_neon (27123 MB/sec)
Jan 16 23:55:48.401509 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 23:55:48.416205 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:55:48.424758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:55:48.438620 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jan 16 23:55:48.441977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:55:48.450876 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 23:55:48.466901 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Jan 16 23:55:48.503959 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:55:48.512782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:55:48.561453 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:55:48.569657 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 23:55:48.591432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:55:48.594244 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:55:48.596064 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:55:48.597505 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:55:48.603638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 23:55:48.619515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:55:48.670004 kernel: scsi host0: Virtio SCSI HBA
Jan 16 23:55:48.674540 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 16 23:55:48.677257 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 16 23:55:48.686746 kernel: ACPI: bus type USB registered
Jan 16 23:55:48.686811 kernel: usbcore: registered new interface driver usbfs
Jan 16 23:55:48.686822 kernel: usbcore: registered new interface driver hub
Jan 16 23:55:48.686833 kernel: usbcore: registered new device driver usb
Jan 16 23:55:48.698233 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:55:48.698429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:48.700865 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:48.702927 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:55:48.703083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:48.704141 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:48.717773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:48.733607 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 16 23:55:48.737498 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 16 23:55:48.737683 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 16 23:55:48.739154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:48.740553 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 16 23:55:48.740703 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 16 23:55:48.744656 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 16 23:55:48.745541 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 16 23:55:48.745555 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 16 23:55:48.745677 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 16 23:55:48.745761 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 16 23:55:48.748906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:48.753860 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 16 23:55:48.754030 kernel: hub 1-0:1.0: USB hub found
Jan 16 23:55:48.754142 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 16 23:55:48.754232 kernel: hub 1-0:1.0: 4 ports detected
Jan 16 23:55:48.754333 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 16 23:55:48.754417 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 16 23:55:48.754517 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 16 23:55:48.754620 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 16 23:55:48.754702 kernel: hub 2-0:1.0: USB hub found
Jan 16 23:55:48.754792 kernel: hub 2-0:1.0: 4 ports detected
Jan 16 23:55:48.766515 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 23:55:48.767010 kernel: GPT:17805311 != 80003071
Jan 16 23:55:48.767023 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 23:55:48.767542 kernel: GPT:17805311 != 80003071
Jan 16 23:55:48.767572 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 23:55:48.769321 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 16 23:55:48.769356 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 16 23:55:48.777614 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:48.818515 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (501)
Jan 16 23:55:48.820518 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (518)
Jan 16 23:55:48.827642 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 16 23:55:48.839336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 16 23:55:48.843848 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 16 23:55:48.844557 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 16 23:55:48.851690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:55:48.856712 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 23:55:48.866115 disk-uuid[574]: Primary Header is updated. Jan 16 23:55:48.866115 disk-uuid[574]: Secondary Entries is updated. Jan 16 23:55:48.866115 disk-uuid[574]: Secondary Header is updated. Jan 16 23:55:48.872502 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:48.877500 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:48.881482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:48.996489 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 16 23:55:49.136305 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 16 23:55:49.136370 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 16 23:55:49.137474 kernel: usbcore: registered new interface driver usbhid Jan 16 23:55:49.137576 kernel: usbhid: USB HID core driver Jan 16 23:55:49.242554 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 16 23:55:49.373513 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 16 23:55:49.426519 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 16 23:55:49.889895 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:49.889952 disk-uuid[575]: The operation has completed successfully. Jan 16 23:55:49.949115 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 23:55:49.949224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 23:55:49.969929 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 16 23:55:49.976768 sh[593]: Success Jan 16 23:55:49.987653 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 16 23:55:50.036815 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 23:55:50.045674 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 16 23:55:50.047090 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 16 23:55:50.065101 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 16 23:55:50.065161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:50.065180 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 23:55:50.065194 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 23:55:50.065570 kernel: BTRFS info (device dm-0): using free space tree Jan 16 23:55:50.073508 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 23:55:50.075366 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
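verity-setup.service above activates /dev/mapper/usr as a dm-verity device: every block read from the /usr partition is hashed (here with the CPU's sha256-ce instructions) and checked against a hash tree whose root must match the verity.usrhash value on the kernel command line. A toy illustration of the hash-tree idea only; real dm-verity uses a salted, fixed on-disk tree format:

```python
import hashlib

BLOCK = 4096

def block_hashes(data: bytes) -> list:
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def root_hash(data: bytes) -> bytes:
    # Collapse per-block hashes pairwise until a single root remains.
    level = block_hashes(data)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

image = b"\x00" * (8 * BLOCK)
trusted_root = root_hash(image)              # analogous to verity.usrhash=...
assert root_hash(image) == trusted_root      # unmodified image verifies
tampered = b"\x01" + image[1:]
assert root_hash(tampered) != trusted_root   # any flipped bit changes the root
```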
Jan 16 23:55:50.078829 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 23:55:50.084705 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 23:55:50.090731 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 23:55:50.099965 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:50.100016 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:50.100467 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:55:50.105922 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:55:50.105982 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:55:50.119518 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 23:55:50.119978 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:50.127094 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 23:55:50.132633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 23:55:50.211851 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 23:55:50.219702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 23:55:50.238650 ignition[687]: Ignition 2.19.0 Jan 16 23:55:50.238660 ignition[687]: Stage: fetch-offline Jan 16 23:55:50.238701 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:50.238710 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:50.239425 ignition[687]: parsed url from cmdline: "" Jan 16 23:55:50.239429 ignition[687]: no config URL provided Jan 16 23:55:50.239436 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 23:55:50.239450 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 16 23:55:50.239469 ignition[687]: failed to fetch config: resource requires networking Jan 16 23:55:50.239686 systemd-networkd[781]: lo: Link UP Jan 16 23:55:50.239690 systemd-networkd[781]: lo: Gained carrier Jan 16 23:55:50.239735 ignition[687]: Ignition finished successfully Jan 16 23:55:50.242156 systemd-networkd[781]: Enumeration completed Jan 16 23:55:50.242478 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 23:55:50.243537 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 23:55:50.244724 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:50.244728 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:50.245902 systemd[1]: Reached target network.target - Network. Jan 16 23:55:50.246600 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:50.246603 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:50.247359 systemd-networkd[781]: eth0: Link UP Jan 16 23:55:50.247363 systemd-networkd[781]: eth0: Gained carrier Jan 16 23:55:50.247369 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:55:50.250729 systemd-networkd[781]: eth1: Link UP Jan 16 23:55:50.250732 systemd-networkd[781]: eth1: Gained carrier Jan 16 23:55:50.250739 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:50.256472 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 23:55:50.269229 ignition[784]: Ignition 2.19.0 Jan 16 23:55:50.269853 ignition[784]: Stage: fetch Jan 16 23:55:50.270037 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:50.270049 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:50.270142 ignition[784]: parsed url from cmdline: "" Jan 16 23:55:50.270145 ignition[784]: no config URL provided Jan 16 23:55:50.270150 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 23:55:50.270157 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jan 16 23:55:50.270182 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 16 23:55:50.270810 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 16 23:55:50.289570 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 23:55:50.311574 systemd-networkd[781]: eth0: DHCPv4 address 49.13.115.208/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 23:55:50.471675 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 16 23:55:50.477388 ignition[784]: GET result: OK Jan 16 23:55:50.477521 ignition[784]: parsing config with SHA512: 60a86331c6a073deb895fd2bc4848a4b7a82ad62e8fcc0a40cf68e9b139e07984a9dc59adaa9270189969c9097271eb582733643243d6e12f20e776aa2f04d05 Jan 16 23:55:50.482888 unknown[784]: fetched base config from "system" Jan 16 23:55:50.482898 unknown[784]: fetched base config from "system" Jan 16 23:55:50.482905 unknown[784]: fetched user config from "hetzner" Jan 16 23:55:50.483260 ignition[784]: fetch: fetch complete Jan 16 23:55:50.483279 ignition[784]: fetch: fetch passed Jan 16 23:55:50.483322 ignition[784]: Ignition finished successfully Jan 16 23:55:50.486228 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 23:55:50.491704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 23:55:50.505618 ignition[791]: Ignition 2.19.0 Jan 16 23:55:50.505630 ignition[791]: Stage: kargs Jan 16 23:55:50.505814 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:50.505824 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:50.506820 ignition[791]: kargs: kargs passed Jan 16 23:55:50.506872 ignition[791]: Ignition finished successfully Jan 16 23:55:50.510527 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 23:55:50.516787 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 16 23:55:50.534133 ignition[797]: Ignition 2.19.0 Jan 16 23:55:50.534799 ignition[797]: Stage: disks Jan 16 23:55:50.534996 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:50.535007 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:50.536025 ignition[797]: disks: disks passed Jan 16 23:55:50.537843 systemd[1]: Finished ignition-disks.service - Ignition (disks).
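The fetch stage above shows Ignition's retry behaviour: attempt #1 fails with "network is unreachable" because DHCP has not finished, attempt #2 succeeds once eth0/eth1 have addresses, and the received config is fingerprinted with SHA512. A hedged sketch of that retry-then-hash flow; the URL is the one from the log, while the retry count and delay are assumptions:

```python
import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

def fetch_userdata(attempts: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        print(f"GET {USERDATA_URL}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            # Early in boot this fails with "network is unreachable";
            # back off and retry once DHCP has brought a link up.
            print(f"GET error: {err}")
            time.sleep(delay)
    raise RuntimeError("no userdata after retries")

if __name__ == "__main__":
    config = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
```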
Jan 16 23:55:50.536074 ignition[797]: Ignition finished successfully Jan 16 23:55:50.540709 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 23:55:50.541454 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 23:55:50.543607 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 23:55:50.544190 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:55:50.545152 systemd[1]: Reached target basic.target - Basic System. Jan 16 23:55:50.550719 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 23:55:50.569261 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 16 23:55:50.573903 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 23:55:50.578562 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 23:55:50.620488 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 16 23:55:50.621783 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 23:55:50.624294 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 23:55:50.639666 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 23:55:50.643807 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 23:55:50.647118 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 23:55:50.651188 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 23:55:50.651231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 23:55:50.657640 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813) Jan 16 23:55:50.654526 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 23:55:50.660478 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:50.660524 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:50.660535 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:55:50.663630 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:55:50.663674 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:55:50.669753 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 23:55:50.677133 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 23:55:50.724934 coreos-metadata[815]: Jan 16 23:55:50.724 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 16 23:55:50.727060 coreos-metadata[815]: Jan 16 23:55:50.726 INFO Fetch successful Jan 16 23:55:50.728609 coreos-metadata[815]: Jan 16 23:55:50.728 INFO wrote hostname ci-4081-3-6-n-32c338e5e2 to /sysroot/etc/hostname Jan 16 23:55:50.732895 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
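flatcar-metadata-hostname above fetches the machine's hostname from the metadata service and writes it into the freshly mounted root at /sysroot/etc/hostname. A minimal sketch of that step; the endpoint path is the one coreos-metadata logs, everything else is an assumption:

```python
import urllib.request

# Endpoint from the log; /sysroot is the mounted root filesystem.
HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(root: str = "/sysroot") -> str:
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{root}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", write_hostname(), "to /sysroot/etc/hostname")
```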
Jan 16 23:55:50.744659 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 23:55:50.750562 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 16 23:55:50.756992 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 23:55:50.764326 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 23:55:50.865130 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 23:55:50.871597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 23:55:50.872846 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 23:55:50.887487 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:50.905686 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 23:55:50.917478 ignition[931]: INFO : Ignition 2.19.0 Jan 16 23:55:50.917478 ignition[931]: INFO : Stage: mount Jan 16 23:55:50.920025 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:50.920025 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:50.920025 ignition[931]: INFO : mount: mount passed Jan 16 23:55:50.920025 ignition[931]: INFO : Ignition finished successfully Jan 16 23:55:50.921517 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 23:55:50.928689 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 23:55:51.064541 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 23:55:51.075872 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 23:55:51.088658 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Jan 16 23:55:51.090501 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:51.090572 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:51.090601 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:55:51.094080 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:55:51.094128 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:55:51.097182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 23:55:51.118063 ignition[959]: INFO : Ignition 2.19.0 Jan 16 23:55:51.118063 ignition[959]: INFO : Stage: files Jan 16 23:55:51.119202 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:51.119202 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:51.119202 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 16 23:55:51.121369 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 23:55:51.121369 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 23:55:51.123671 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 23:55:51.124541 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 23:55:51.124541 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 23:55:51.124077 unknown[959]: wrote ssh authorized keys file for user: core Jan 16 23:55:51.127197 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 16 23:55:51.127197 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 16 23:55:51.234351 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 23:55:51.424430 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 16 23:55:51.424430 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 23:55:51.429372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 16 23:55:51.463702 systemd-networkd[781]: eth0: Gained IPv6LL Jan 16 23:55:51.758829 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 23:55:51.847753 systemd-networkd[781]: eth1: Gained IPv6LL Jan 16 23:55:53.026377 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 23:55:53.026377 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 23:55:53.030189 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 23:55:53.030189 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 23:55:53.030189 ignition[959]: INFO : files: files passed Jan 16 23:55:53.030189 ignition[959]: INFO : Ignition finished successfully Jan 16 23:55:53.033500 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 23:55:53.042053 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 23:55:53.044855 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 23:55:53.048057 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 23:55:53.048235 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 16 23:55:53.068051 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:55:53.068051 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:55:53.071738 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:55:53.074879 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 23:55:53.075997 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 23:55:53.086816 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 23:55:53.127573 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 23:55:53.127725 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 23:55:53.129922 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 23:55:53.130834 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 23:55:53.132178 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 23:55:53.137714 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 23:55:53.151997 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 23:55:53.161842 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 23:55:53.173119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:55:53.173985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 23:55:53.176911 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 23:55:53.178550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 23:55:53.178683 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 23:55:53.180207 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 23:55:53.180953 systemd[1]: Stopped target basic.target - Basic System. Jan 16 23:55:53.182064 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 23:55:53.183110 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 23:55:53.184084 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 23:55:53.185119 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 23:55:53.186182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 23:55:53.187320 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 23:55:53.188321 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 23:55:53.189418 systemd[1]: Stopped target swap.target - Swaps. Jan 16 23:55:53.190309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 23:55:53.190431 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 23:55:53.191732 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:55:53.192431 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 23:55:53.193474 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 23:55:53.193982 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 16 23:55:53.194730 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 23:55:53.194846 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 23:55:53.196439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 23:55:53.196571 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 23:55:53.198343 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 23:55:53.198437 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 23:55:53.199373 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 23:55:53.199511 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 23:55:53.212945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 23:55:53.214301 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 23:55:53.214807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:55:53.220696 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 23:55:53.221266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 23:55:53.221432 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:55:53.222671 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 23:55:53.222991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 23:55:53.230226 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 23:55:53.231499 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 23:55:53.237055 ignition[1011]: INFO : Ignition 2.19.0 Jan 16 23:55:53.239100 ignition[1011]: INFO : Stage: umount Jan 16 23:55:53.239100 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:53.239100 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:53.239100 ignition[1011]: INFO : umount: umount passed Jan 16 23:55:53.239100 ignition[1011]: INFO : Ignition finished successfully Jan 16 23:55:53.243773 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 23:55:53.245534 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 23:55:53.247332 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 23:55:53.247830 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 23:55:53.247872 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 23:55:53.248563 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 23:55:53.248604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 23:55:53.249568 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 23:55:53.249604 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 23:55:53.250679 systemd[1]: Stopped target network.target - Network. Jan 16 23:55:53.251366 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 23:55:53.251420 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 23:55:53.253664 systemd[1]: Stopped target paths.target - Path Units. Jan 16 23:55:53.254351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 23:55:53.256006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 16 23:55:53.256835 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 23:55:53.257811 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 23:55:53.259001 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 23:55:53.259045 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 23:55:53.259876 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 23:55:53.259907 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 23:55:53.260851 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 23:55:53.260899 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 23:55:53.261747 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 23:55:53.261787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 23:55:53.262809 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 23:55:53.263887 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 23:55:53.265098 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 23:55:53.265187 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 23:55:53.266286 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 23:55:53.266374 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 23:55:53.268535 systemd-networkd[781]: eth1: DHCPv6 lease lost Jan 16 23:55:53.274572 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 16 23:55:53.278847 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 23:55:53.278988 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 23:55:53.282040 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 23:55:53.282504 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 23:55:53.283946 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 23:55:53.284012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 23:55:53.293708 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 23:55:53.294501 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 23:55:53.294587 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 23:55:53.297132 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 23:55:53.297197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:55:53.298096 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 23:55:53.298136 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 23:55:53.298805 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 23:55:53.298843 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 23:55:53.300149 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 23:55:53.317822 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 23:55:53.317992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 23:55:53.319123 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 23:55:53.319169 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 23:55:53.319960 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 16 23:55:53.319996 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 23:55:53.322451 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 23:55:53.322594 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 23:55:53.324325 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 23:55:53.324372 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 23:55:53.325927 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 23:55:53.325972 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 23:55:53.333803 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 23:55:53.336863 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 23:55:53.336990 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:55:53.339149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:55:53.339207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:55:53.340466 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 23:55:53.341193 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 23:55:53.341996 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 23:55:53.342070 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 23:55:53.343437 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 23:55:53.352743 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 23:55:53.362426 systemd[1]: Switching root. Jan 16 23:55:53.400808 systemd-journald[238]: Journal stopped Jan 16 23:55:54.294512 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 16 23:55:54.294588 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 23:55:54.294601 kernel: SELinux: policy capability open_perms=1 Jan 16 23:55:54.294611 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 23:55:54.294627 kernel: SELinux: policy capability always_check_network=0 Jan 16 23:55:54.294636 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 23:55:54.294646 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 23:55:54.294655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 23:55:54.294666 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 23:55:54.294676 kernel: audit: type=1403 audit(1768607753.554:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 23:55:54.294687 systemd[1]: Successfully loaded SELinux policy in 37.215ms. Jan 16 23:55:54.294710 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.830ms. Jan 16 23:55:54.294722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 23:55:54.294733 systemd[1]: Detected virtualization kvm. Jan 16 23:55:54.294743 systemd[1]: Detected architecture arm64. Jan 16 23:55:54.294753 systemd[1]: Detected first boot. Jan 16 23:55:54.294769 systemd[1]: Hostname set to <ci-4081-3-6-n-32c338e5e2>. Jan 16 23:55:54.294780 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:55:54.294791 zram_generator::config[1054]: No configuration found. Jan 16 23:55:54.294802 systemd[1]: Populated /etc with preset unit settings. Jan 16 23:55:54.294813 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 23:55:54.294823 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 23:55:54.294833 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 23:55:54.294845 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 23:55:54.294855 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 23:55:54.294868 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 23:55:54.294879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 23:55:54.294889 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 23:55:54.294900 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 23:55:54.294911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 23:55:54.294921 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 23:55:54.294931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 23:55:54.294942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 23:55:54.294953 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 23:55:54.294964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 23:55:54.294975 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 23:55:54.294986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 23:55:54.294997 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 16 23:55:54.295007 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 23:55:54.295018 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 23:55:54.295029 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 23:55:54.295041 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 23:55:54.295051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 23:55:54.295062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 23:55:54.295077 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 23:55:54.295087 systemd[1]: Reached target slices.target - Slice Units. Jan 16 23:55:54.295097 systemd[1]: Reached target swap.target - Swaps. Jan 16 23:55:54.295108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 23:55:54.295120 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 23:55:54.295131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 23:55:54.295142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 23:55:54.295153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 23:55:54.295163 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 16 23:55:54.295173 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 23:55:54.295184 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 23:55:54.295195 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 23:55:54.295205 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 23:55:54.295215 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 23:55:54.295227 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 23:55:54.295238 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 23:55:54.295260 systemd[1]: Reached target machines.target - Containers. Jan 16 23:55:54.295273 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 23:55:54.295284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:54.295297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 23:55:54.295309 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 23:55:54.295327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:54.295339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 23:55:54.295350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:54.295364 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 23:55:54.295375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:54.295387 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 23:55:54.295399 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 23:55:54.295410 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 23:55:54.295420 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 23:55:54.295430 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 23:55:54.295441 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 23:55:54.295452 kernel: loop: module loaded Jan 16 23:55:54.295541 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 23:55:54.295554 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 23:55:54.295564 kernel: fuse: init (API version 7.39) Jan 16 23:55:54.295574 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 23:55:54.295587 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 23:55:54.295598 systemd[1]: verity-setup.service: Deactivated successfully. Jan 16 23:55:54.295609 systemd[1]: Stopped verity-setup.service. Jan 16 23:55:54.295619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 23:55:54.295630 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 23:55:54.295642 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 23:55:54.295652 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 16 23:55:54.295662 kernel: ACPI: bus type drm_connector registered Jan 16 23:55:54.295672 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 23:55:54.295682 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 23:55:54.295697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:55:54.295708 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 23:55:54.295718 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 23:55:54.295731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:54.295741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:54.295752 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 23:55:54.295763 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:55:54.295773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 23:55:54.295784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:54.295797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:54.295809 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 23:55:54.295819 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 23:55:54.295830 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:54.295841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:54.295851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 23:55:54.295862 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 23:55:54.295873 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 23:55:54.295886 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 23:55:54.295926 systemd-journald[1128]: Collecting audit messages is disabled. Jan 16 23:55:54.295950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 23:55:54.295962 systemd-journald[1128]: Journal started Jan 16 23:55:54.295984 systemd-journald[1128]: Runtime Journal (/run/log/journal/7e9cc60c086a434a950f11fcd2b714b6) is 8.0M, max 76.6M, 68.6M free. Jan 16 23:55:54.302694 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 23:55:54.017192 systemd[1]: Queued start job for default target multi-user.target. Jan 16 23:55:54.033202 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 16 23:55:54.034158 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 23:55:54.308474 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 23:55:54.308540 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 23:55:54.310704 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 23:55:54.317798 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 23:55:54.322622 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 23:55:54.325486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 16 23:55:54.332492 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 23:55:54.338492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:55:54.344538 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 23:55:54.344610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:54.352617 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 23:55:54.358481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 23:55:54.365070 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 23:55:54.369485 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 23:55:54.370350 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 23:55:54.372759 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 23:55:54.375578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 23:55:54.409450 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 23:55:54.414881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:55:54.420764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:55:54.424506 kernel: loop0: detected capacity change from 0 to 8 Jan 16 23:55:54.434408 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 23:55:54.436587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 23:55:54.440655 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 23:55:54.452826 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 23:55:54.456496 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 23:55:54.458831 kernel: loop1: detected capacity change from 0 to 114328 Jan 16 23:55:54.479205 systemd-journald[1128]: Time spent on flushing to /var/log/journal/7e9cc60c086a434a950f11fcd2b714b6 is 50.138ms for 1133 entries. Jan 16 23:55:54.479205 systemd-journald[1128]: System Journal (/var/log/journal/7e9cc60c086a434a950f11fcd2b714b6) is 8.0M, max 584.8M, 576.8M free. Jan 16 23:55:54.536705 systemd-journald[1128]: Received client request to flush runtime journal. Jan 16 23:55:54.536750 kernel: loop2: detected capacity change from 0 to 211168 Jan 16 23:55:54.481331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 23:55:54.485895 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 23:55:54.487783 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 23:55:54.500695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 23:55:54.505876 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 23:55:54.542407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
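journald above reports 50.138 ms spent flushing 1133 entries to the persistent journal, and sizes the runtime journal at 8.0M of a 76.6M cap. The per-entry flush cost works out to roughly 44 µs:

```python
flush_ms, entries = 50.138, 1133               # from the journald message above
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~44.3 us

# Runtime journal sizing from the log: 8.0M used of a 76.6M cap.
used_m, cap_m = 8.0, 76.6
print(f"{cap_m - used_m:.1f}M free")           # 68.6M, consistent with the log
```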
Jan 16 23:55:54.548533 kernel: loop3: detected capacity change from 0 to 114432 Jan 16 23:55:54.552058 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 16 23:55:54.552076 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 16 23:55:54.562610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:55:54.586187 kernel: loop4: detected capacity change from 0 to 8 Jan 16 23:55:54.589493 kernel: loop5: detected capacity change from 0 to 114328 Jan 16 23:55:54.607490 kernel: loop6: detected capacity change from 0 to 211168 Jan 16 23:55:54.628578 kernel: loop7: detected capacity change from 0 to 114432 Jan 16 23:55:54.645333 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 16 23:55:54.646606 (sd-merge)[1194]: Merged extensions into '/usr'. Jan 16 23:55:54.655267 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 23:55:54.655384 systemd[1]: Reloading... Jan 16 23:55:54.767482 zram_generator::config[1220]: No configuration found. Jan 16 23:55:54.901684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:55:54.928182 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 23:55:54.953264 systemd[1]: Reloading finished in 297 ms. Jan 16 23:55:54.976111 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 23:55:54.978494 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 23:55:54.990316 systemd[1]: Starting ensure-sysext.service... Jan 16 23:55:54.992650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 23:55:55.005685 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 16 23:55:55.005710 systemd[1]: Reloading... Jan 16 23:55:55.042795 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 23:55:55.044145 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 23:55:55.045012 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 23:55:55.045352 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 16 23:55:55.047312 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 16 23:55:55.055364 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 23:55:55.056675 systemd-tmpfiles[1258]: Skipping /boot Jan 16 23:55:55.083674 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 23:55:55.083802 systemd-tmpfiles[1258]: Skipping /boot Jan 16 23:55:55.127488 zram_generator::config[1296]: No configuration found. Jan 16 23:55:55.217820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:55:55.264049 systemd[1]: Reloading finished in 257 ms. Jan 16 23:55:55.285287 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 16 23:55:55.287560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 23:55:55.307774 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:55:55.312708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 23:55:55.317500 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 23:55:55.328678 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 23:55:55.332654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 23:55:55.336650 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 23:55:55.351764 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 23:55:55.354880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:55.359759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:55.369838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:55.378502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:55.379572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:55.384009 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 23:55:55.386516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:55.388726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:55.391348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:55.391598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:55.398965 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 23:55:55.410524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:55.413881 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Jan 16 23:55:55.418451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:55.425077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:55.426408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:55.432756 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 23:55:55.433992 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:55.434154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:55.440381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:55.448548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 23:55:55.456572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:55.457223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:55.457784 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 16 23:55:55.459185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:55.459828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:55.474907 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 23:55:55.477535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:55:55.477778 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 23:55:55.478786 systemd[1]: Finished ensure-sysext.service. Jan 16 23:55:55.480526 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 23:55:55.482235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:55.482418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:55.490228 augenrules[1367]: No rules Jan 16 23:55:55.493542 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:55:55.500411 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:55:55.501530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 23:55:55.511650 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 23:55:55.512276 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:55:55.530863 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:55.531054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:55.533097 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:55.534549 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 23:55:55.583231 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 16 23:55:55.609562 systemd-resolved[1330]: Positive Trust Anchors: Jan 16 23:55:55.609581 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 23:55:55.609613 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 23:55:55.614715 systemd-resolved[1330]: Using system hostname 'ci-4081-3-6-n-32c338e5e2'. Jan 16 23:55:55.625570 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 23:55:55.626298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:55:55.642338 systemd-networkd[1380]: lo: Link UP Jan 16 23:55:55.642353 systemd-networkd[1380]: lo: Gained carrier Jan 16 23:55:55.645328 systemd-networkd[1380]: Enumeration completed Jan 16 23:55:55.645606 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 16 23:55:55.646784 systemd[1]: Reached target network.target - Network. Jan 16 23:55:55.649616 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:55.649629 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:55.653674 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:55.653684 systemd-networkd[1380]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:55.654321 systemd-networkd[1380]: eth0: Link UP Jan 16 23:55:55.654330 systemd-networkd[1380]: eth0: Gained carrier Jan 16 23:55:55.654344 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:55.655985 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 23:55:55.657750 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 23:55:55.658430 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 23:55:55.665845 systemd-networkd[1380]: eth1: Link UP Jan 16 23:55:55.665852 systemd-networkd[1380]: eth1: Gained carrier Jan 16 23:55:55.665873 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:55.689485 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:55.712643 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 23:55:55.712761 systemd-networkd[1380]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 23:55:55.713943 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Jan 16 23:55:55.718554 systemd-networkd[1380]: eth0: DHCPv4 address 49.13.115.208/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 23:55:55.724782 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:55.778642 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 16 23:55:55.778807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:55.782691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:55.796492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:55.799193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:55.801694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:55.801736 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:55:55.802077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:55.802237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
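Both NICs come up with host-scoped /32 leases, and eth0's lease names a gateway (172.31.1.1) that is not inside that /32. A short sketch, using only the addresses from the DHCPv4 entries above, showing why the gateway can only be reached through an on-link host route (the usual pattern on Hetzner Cloud; the routing detail is inferred, not stated in this log):

```python
import ipaddress

# Values from the DHCPv4 lease logged above.
iface = ipaddress.ip_interface("49.13.115.208/32")
gateway = ipaddress.ip_address("172.31.1.1")

# A /32 network contains only the host itself, so the gateway is
# off-subnet and must be installed as an on-link route before the
# default route can point at it.
print(gateway in iface.network)  # False
```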
Jan 16 23:55:55.815873 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:55.816049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:55.818529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:55.818667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:55.822450 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:55:55.823816 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:55.842488 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1383) Jan 16 23:55:55.854593 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 16 23:55:55.854686 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 23:55:55.854700 kernel: [drm] features: -context_init Jan 16 23:55:55.868504 kernel: [drm] number of scanouts: 1 Jan 16 23:55:55.868584 kernel: [drm] number of cap sets: 0 Jan 16 23:55:55.870484 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 16 23:55:55.879673 kernel: Console: switching to colour frame buffer device 160x50 Jan 16 23:55:55.885554 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 23:55:55.889956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:55:55.895784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:55:55.905787 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 23:55:55.918506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:55:55.918699 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:55:55.925760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:55:55.926827 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 23:55:55.983510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:55:56.019916 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 23:55:56.024769 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 23:55:56.041441 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:55:56.073064 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 23:55:56.074736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:55:56.075357 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:55:56.076135 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 23:55:56.077034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 23:55:56.077923 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 23:55:56.078624 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 16 23:55:56.079274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 23:55:56.080561 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 23:55:56.080597 systemd[1]: Reached target paths.target - Path Units. Jan 16 23:55:56.081056 systemd[1]: Reached target timers.target - Timer Units. Jan 16 23:55:56.082177 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 23:55:56.084269 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 23:55:56.094262 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 23:55:56.096914 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 23:55:56.098145 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 23:55:56.098926 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 23:55:56.099494 systemd[1]: Reached target basic.target - Basic System. Jan 16 23:55:56.100075 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:55:56.100105 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:55:56.104670 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 23:55:56.107538 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 23:55:56.112707 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 23:55:56.115491 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:55:56.115700 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 23:55:56.118895 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 23:55:56.119452 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 23:55:56.121326 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 23:55:56.126655 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 23:55:56.137754 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 16 23:55:56.142681 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 23:55:56.148692 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 23:55:56.166961 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 23:55:56.168968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 23:55:56.171126 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 23:55:56.173742 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 23:55:56.176606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 23:55:56.179984 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 23:55:56.187101 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 23:55:56.188549 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 16 23:55:56.191626 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 23:55:56.192850 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 23:55:56.198147 jq[1452]: false Jan 16 23:55:56.217782 coreos-metadata[1450]: Jan 16 23:55:56.215 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 16 23:55:56.218417 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 23:55:56.219728 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 23:55:56.222992 extend-filesystems[1453]: Found loop4 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found loop5 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found loop6 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found loop7 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found sda Jan 16 23:55:56.224496 extend-filesystems[1453]: Found sda1 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found sda2 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found sda3 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found usr Jan 16 23:55:56.224496 extend-filesystems[1453]: Found sda4 Jan 16 23:55:56.224496 extend-filesystems[1453]: Found sda6 Jan 16 23:55:56.259670 extend-filesystems[1453]: Found sda7 Jan 16 23:55:56.259670 extend-filesystems[1453]: Found sda9 Jan 16 23:55:56.259670 extend-filesystems[1453]: Checking size of /dev/sda9 Jan 16 23:55:56.254222 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 23:55:56.253785 dbus-daemon[1451]: [system] SELinux support is enabled Jan 16 23:55:56.267088 tar[1473]: linux-arm64/LICENSE Jan 16 23:55:56.267088 tar[1473]: linux-arm64/helm Jan 16 23:55:56.267290 coreos-metadata[1450]: Jan 16 23:55:56.233 INFO Fetch successful Jan 16 23:55:56.267290 coreos-metadata[1450]: Jan 16 23:55:56.233 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 16 23:55:56.267290 coreos-metadata[1450]: Jan 16 23:55:56.239 INFO Fetch successful Jan 16 23:55:56.262603 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 23:55:56.276728 jq[1465]: true Jan 16 23:55:56.269942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 23:55:56.269968 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 23:55:56.272594 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 23:55:56.272614 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
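coreos-metadata resolves instance data from Hetzner's link-local metadata service; the URL in the fetch entries above is the same endpoint any process on the instance can query. A minimal fetch sketch (standard library only; it works only from inside the instance, and the 3-second timeout is an illustrative choice, not something this log specifies):

```python
from urllib.request import urlopen

# Endpoint exactly as logged by coreos-metadata; link-local, on-instance only.
URL = "http://169.254.169.254/hetzner/v1/metadata"

with urlopen(URL, timeout=3) as resp:
    print(resp.read().decode())
```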
Jan 16 23:55:56.300626 extend-filesystems[1453]: Resized partition /dev/sda9 Jan 16 23:55:56.307676 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024) Jan 16 23:55:56.313013 jq[1490]: true Jan 16 23:55:56.319179 update_engine[1463]: I20260116 23:55:56.318816 1463 main.cc:92] Flatcar Update Engine starting Jan 16 23:55:56.325780 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 16 23:55:56.340672 update_engine[1463]: I20260116 23:55:56.340391 1463 update_check_scheduler.cc:74] Next update check in 7m30s Jan 16 23:55:56.340908 systemd[1]: Started update-engine.service - Update Engine. Jan 16 23:55:56.356771 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 23:55:56.403702 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1372) Jan 16 23:55:56.448381 systemd-logind[1461]: New seat seat0. Jan 16 23:55:56.448987 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 23:55:56.456987 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 23:55:56.458766 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 23:55:56.458783 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 16 23:55:56.459567 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 23:55:56.510714 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:55:56.513630 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 23:55:56.530973 systemd[1]: Starting sshkeys.service... Jan 16 23:55:56.550522 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 16 23:55:56.554163 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 23:55:56.564158 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 23:55:56.575966 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 16 23:55:56.575966 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 16 23:55:56.575966 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 16 23:55:56.575210 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 23:55:56.584685 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Jan 16 23:55:56.584685 extend-filesystems[1453]: Found sr0 Jan 16 23:55:56.575483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 23:55:56.625492 coreos-metadata[1530]: Jan 16 23:55:56.624 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 16 23:55:56.626937 coreos-metadata[1530]: Jan 16 23:55:56.626 INFO Fetch successful Jan 16 23:55:56.631206 unknown[1530]: wrote ssh authorized keys file for user: core Jan 16 23:55:56.633748 containerd[1486]: time="2026-01-16T23:55:56.633656400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 23:55:56.649580 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 23:55:56.674974 update-ssh-keys[1540]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:55:56.676406 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
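The resize pass above grows /dev/sda9 from 1617920 to 9393147 blocks; with resize2fs reporting "(4k) blocks", that is roughly 6.2 GiB expanded to 35.8 GiB. The arithmetic, using only the numbers from the kernel and resize2fs entries:

```python
BLOCK = 4096  # resize2fs reports "(4k) blocks"

before = 1_617_920 * BLOCK   # blocks before on-line resize
after = 9_393_147 * BLOCK    # blocks after on-line resize

GiB = 1024 ** 3
print(f"{before / GiB:.2f} GiB -> {after / GiB:.2f} GiB")  # 6.17 GiB -> 35.83 GiB
```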
Jan 16 23:55:56.683075 systemd[1]: Finished sshkeys.service. Jan 16 23:55:56.711469 containerd[1486]: time="2026-01-16T23:55:56.709975760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.716680 containerd[1486]: time="2026-01-16T23:55:56.716634800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:55:56.716785 containerd[1486]: time="2026-01-16T23:55:56.716770000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 23:55:56.716864 containerd[1486]: time="2026-01-16T23:55:56.716849840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 23:55:56.717078 containerd[1486]: time="2026-01-16T23:55:56.717058400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 23:55:56.717515 containerd[1486]: time="2026-01-16T23:55:56.717496520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.717666 containerd[1486]: time="2026-01-16T23:55:56.717643520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719476400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719691160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719708720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719721920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719732800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719808320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.719998600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.720096000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.720110800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.720178840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 23:55:56.720419 containerd[1486]: time="2026-01-16T23:55:56.720220960Z" level=info msg="metadata content store policy set" policy=shared Jan 16 23:55:56.726485 containerd[1486]: time="2026-01-16T23:55:56.726435520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 23:55:56.726625 containerd[1486]: time="2026-01-16T23:55:56.726610520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 23:55:56.726734 containerd[1486]: time="2026-01-16T23:55:56.726721320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 23:55:56.726793 containerd[1486]: time="2026-01-16T23:55:56.726780800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 23:55:56.726869 containerd[1486]: time="2026-01-16T23:55:56.726855760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 23:55:56.727071 containerd[1486]: time="2026-01-16T23:55:56.727051680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731661920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731904360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731928120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731942400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731961480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731978600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.731995200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732014680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732034360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732052000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732066040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732081680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732107680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732491 containerd[1486]: time="2026-01-16T23:55:56.732127000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732146440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732164080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732180880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732194560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732211240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732228200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732291440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732317280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732334080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732349760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732363120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732386400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732412040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732426760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 16 23:55:56.732794 containerd[1486]: time="2026-01-16T23:55:56.732441720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 23:55:56.737155 containerd[1486]: time="2026-01-16T23:55:56.736759360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 23:55:56.737291 containerd[1486]: time="2026-01-16T23:55:56.737235200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 23:55:56.737291 containerd[1486]: time="2026-01-16T23:55:56.737271880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 23:55:56.737291 containerd[1486]: time="2026-01-16T23:55:56.737287640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 23:55:56.737351 containerd[1486]: time="2026-01-16T23:55:56.737298960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.737351 containerd[1486]: time="2026-01-16T23:55:56.737316600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 23:55:56.737351 containerd[1486]: time="2026-01-16T23:55:56.737329880Z" level=info msg="NRI interface is disabled by configuration." Jan 16 23:55:56.737351 containerd[1486]: time="2026-01-16T23:55:56.737340760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 16 23:55:56.737783 containerd[1486]: time="2026-01-16T23:55:56.737713960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 23:55:56.737897 containerd[1486]: time="2026-01-16T23:55:56.737784280Z" level=info msg="Connect containerd service" Jan 16 23:55:56.737897 containerd[1486]: time="2026-01-16T23:55:56.737828280Z" level=info msg="using legacy CRI server" Jan 16 23:55:56.737897 containerd[1486]: time="2026-01-16T23:55:56.737835720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 23:55:56.737962 containerd[1486]: time="2026-01-16T23:55:56.737924360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 23:55:56.738701 containerd[1486]: time="2026-01-16T23:55:56.738671240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 23:55:56.738979 containerd[1486]: time="2026-01-16T23:55:56.738943240Z" level=info msg="Start subscribing containerd event" Jan 16 23:55:56.739961 containerd[1486]: time="2026-01-16T23:55:56.739514760Z" level=info msg="Start recovering state" Jan 16 23:55:56.739961 containerd[1486]: time="2026-01-16T23:55:56.739591040Z" level=info msg="Start event monitor" Jan 16 23:55:56.739961 containerd[1486]: time="2026-01-16T23:55:56.739605240Z" level=info msg="Start snapshots syncer" Jan 16 23:55:56.739961 containerd[1486]: time="2026-01-16T23:55:56.739614640Z" level=info msg="Start cni network conf syncer for default" Jan 16 23:55:56.739961 containerd[1486]: time="2026-01-16T23:55:56.739621680Z" level=info msg="Start streaming server" Jan 16 23:55:56.742124 containerd[1486]: time="2026-01-16T23:55:56.742090720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 23:55:56.742163 containerd[1486]: time="2026-01-16T23:55:56.742154520Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 23:55:56.742329 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 23:55:56.745468 containerd[1486]: time="2026-01-16T23:55:56.743381600Z" level=info msg="containerd successfully booted in 0.113200s" Jan 16 23:55:56.903698 systemd-networkd[1380]: eth0: Gained IPv6LL Jan 16 23:55:56.910639 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 23:55:56.911949 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 23:55:56.921678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:55:56.930802 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
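containerd's plugin probe above skips every snapshotter whose backing store is absent (no aufs module, /var/lib/containerd is ext4 rather than btrfs or zfs, devmapper unconfigured) and settles on overlayfs. A small sketch that summarizes those decisions from the "skip loading plugin" messages; the sample lines are abbreviated from this log, and the parsing is illustrative rather than any containerd-provided interface:

```python
import re

# Abbreviated "skip loading plugin" messages from the containerd boot above.
log = """\
skip loading plugin "io.containerd.snapshotter.v1.aufs"... error="aufs is not supported"
skip loading plugin "io.containerd.snapshotter.v1.btrfs"... error="must be a btrfs filesystem"
skip loading plugin "io.containerd.snapshotter.v1.devmapper"... error="devmapper not configured"
skip loading plugin "io.containerd.snapshotter.v1.zfs"... error="must be a zfs filesystem"
"""

for line in log.splitlines():
    m = re.search(r'skip loading plugin "([^"]+)".*error="([^"]+)"', line)
    if m:
        plugin, reason = m.groups()
        print(f"{plugin}: {reason}")
```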
Jan 16 23:55:56.958265 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 23:55:56.980172 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 23:55:56.989498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 23:55:57.002721 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 23:55:57.010843 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 23:55:57.011209 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 23:55:57.019262 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 23:55:57.034520 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 23:55:57.047790 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 23:55:57.059386 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 16 23:55:57.061411 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 23:55:57.102547 tar[1473]: linux-arm64/README.md Jan 16 23:55:57.116519 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 23:55:57.160217 systemd-networkd[1380]: eth1: Gained IPv6LL Jan 16 23:55:57.777754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:55:57.780312 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 23:55:57.780519 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:55:57.783382 systemd[1]: Startup finished in 835ms (kernel) + 5.857s (initrd) + 4.266s (userspace) = 10.958s. Jan 16 23:55:58.306881 kubelet[1581]: E0116 23:55:58.306827 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:55:58.311506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:55:58.311821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:55:59.044538 systemd-timesyncd[1393]: Contacted time server 85.220.190.246:123 (3.flatcar.pool.ntp.org). Jan 16 23:55:59.044631 systemd-timesyncd[1393]: Initial clock synchronization to Fri 2026-01-16 23:55:59.207485 UTC. Jan 16 23:56:08.514335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 23:56:08.522862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:08.646712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:08.656015 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:08.715988 kubelet[1600]: E0116 23:56:08.715902 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:08.721789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:08.722171 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
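The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet, and systemd re-launches it on a fixed delay: the failure at 23:55:58 is followed by "Scheduled restart job" entries at 23:56:08, 23:56:18, and 23:56:29 further down. This crash-loop is expected on a node that has not yet run kubeadm init/join, which is what writes that file; kubeadm-style kubelet drop-ins commonly set Restart=always with RestartSec=10, which is consistent with (though not stated in) this log. A sketch confirming the cadence from the journal timestamps:

```python
from datetime import datetime

# Failure and restart timestamps copied from the journal entries above and below.
stamps = ["23:55:58.311506", "23:56:08.514335", "23:56:18.763973", "23:56:29.014020"]

times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.1f}s")  # each gap is roughly 10.2-10.3s
```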
Jan 16 23:56:15.484047 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 23:56:15.490055 systemd[1]: Started sshd@0-49.13.115.208:22-4.153.228.146:56576.service - OpenSSH per-connection server daemon (4.153.228.146:56576). Jan 16 23:56:16.130604 sshd[1608]: Accepted publickey for core from 4.153.228.146 port 56576 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:16.133861 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:16.143679 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 23:56:16.151848 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 23:56:16.155287 systemd-logind[1461]: New session 1 of user core. Jan 16 23:56:16.165703 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 23:56:16.182176 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 23:56:16.187170 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 23:56:16.311564 systemd[1612]: Queued start job for default target default.target. Jan 16 23:56:16.326012 systemd[1612]: Created slice app.slice - User Application Slice. Jan 16 23:56:16.326075 systemd[1612]: Reached target paths.target - Paths. Jan 16 23:56:16.326098 systemd[1612]: Reached target timers.target - Timers. Jan 16 23:56:16.328109 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 23:56:16.350868 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 23:56:16.351053 systemd[1612]: Reached target sockets.target - Sockets. Jan 16 23:56:16.351078 systemd[1612]: Reached target basic.target - Basic System. Jan 16 23:56:16.351155 systemd[1612]: Reached target default.target - Main User Target. Jan 16 23:56:16.351190 systemd[1612]: Startup finished in 157ms. Jan 16 23:56:16.351236 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 23:56:16.358720 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 23:56:16.836872 systemd[1]: Started sshd@1-49.13.115.208:22-4.153.228.146:56578.service - OpenSSH per-connection server daemon (4.153.228.146:56578). Jan 16 23:56:17.476637 sshd[1623]: Accepted publickey for core from 4.153.228.146 port 56578 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:17.479790 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:17.486809 systemd-logind[1461]: New session 2 of user core. Jan 16 23:56:17.495857 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 23:56:17.936343 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:17.942041 systemd[1]: sshd@1-49.13.115.208:22-4.153.228.146:56578.service: Deactivated successfully. Jan 16 23:56:17.944261 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 23:56:17.946845 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. Jan 16 23:56:17.948283 systemd-logind[1461]: Removed session 2. Jan 16 23:56:18.038722 systemd[1]: Started sshd@2-49.13.115.208:22-4.153.228.146:56580.service - OpenSSH per-connection server daemon (4.153.228.146:56580). 
Jan 16 23:56:18.648623 sshd[1630]: Accepted publickey for core from 4.153.228.146 port 56580 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:18.650901 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:18.656622 systemd-logind[1461]: New session 3 of user core. Jan 16 23:56:18.667801 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 23:56:18.763973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 23:56:18.772830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:18.891346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:18.896528 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:18.934543 kubelet[1641]: E0116 23:56:18.934377 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:18.938243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:18.938396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:19.072538 sshd[1630]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:19.077663 systemd[1]: sshd@2-49.13.115.208:22-4.153.228.146:56580.service: Deactivated successfully. Jan 16 23:56:19.079863 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 23:56:19.080744 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. Jan 16 23:56:19.081998 systemd-logind[1461]: Removed session 3. Jan 16 23:56:19.184749 systemd[1]: Started sshd@3-49.13.115.208:22-4.153.228.146:56592.service - OpenSSH per-connection server daemon (4.153.228.146:56592). Jan 16 23:56:19.810590 sshd[1652]: Accepted publickey for core from 4.153.228.146 port 56592 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:19.812554 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:19.816749 systemd-logind[1461]: New session 4 of user core. Jan 16 23:56:19.827974 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 23:56:20.250325 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:20.256032 systemd[1]: sshd@3-49.13.115.208:22-4.153.228.146:56592.service: Deactivated successfully. Jan 16 23:56:20.256730 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Jan 16 23:56:20.259020 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 23:56:20.260805 systemd-logind[1461]: Removed session 4. Jan 16 23:56:20.373987 systemd[1]: Started sshd@4-49.13.115.208:22-4.153.228.146:56602.service - OpenSSH per-connection server daemon (4.153.228.146:56602). Jan 16 23:56:20.997850 sshd[1659]: Accepted publickey for core from 4.153.228.146 port 56602 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:21.000124 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:21.005879 systemd-logind[1461]: New session 5 of user core. Jan 16 23:56:21.017845 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 16 23:56:21.353125 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 23:56:21.353876 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:21.369727 sudo[1662]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:21.472214 sshd[1659]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:21.479271 systemd[1]: sshd@4-49.13.115.208:22-4.153.228.146:56602.service: Deactivated successfully. Jan 16 23:56:21.481106 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 23:56:21.483040 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Jan 16 23:56:21.485771 systemd-logind[1461]: Removed session 5. Jan 16 23:56:21.601164 systemd[1]: Started sshd@5-49.13.115.208:22-4.153.228.146:56612.service - OpenSSH per-connection server daemon (4.153.228.146:56612). Jan 16 23:56:22.244878 sshd[1667]: Accepted publickey for core from 4.153.228.146 port 56612 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:22.247616 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:22.253069 systemd-logind[1461]: New session 6 of user core. Jan 16 23:56:22.261783 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 23:56:22.601820 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 23:56:22.602127 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:22.606184 sudo[1671]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:22.611614 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 23:56:22.611911 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:22.627943 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 23:56:22.629956 auditctl[1674]: No rules Jan 16 23:56:22.630277 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 23:56:22.630448 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 23:56:22.632798 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:56:22.664680 augenrules[1692]: No rules Jan 16 23:56:22.666563 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:56:22.668711 sudo[1670]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:22.772617 sshd[1667]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:22.779112 systemd[1]: sshd@5-49.13.115.208:22-4.153.228.146:56612.service: Deactivated successfully. Jan 16 23:56:22.781956 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 23:56:22.783102 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Jan 16 23:56:22.784372 systemd-logind[1461]: Removed session 6. Jan 16 23:56:22.877537 systemd[1]: Started sshd@6-49.13.115.208:22-4.153.228.146:56618.service - OpenSSH per-connection server daemon (4.153.228.146:56618). Jan 16 23:56:23.498986 sshd[1700]: Accepted publickey for core from 4.153.228.146 port 56618 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:23.500784 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:23.505602 systemd-logind[1461]: New session 7 of user core. 
Jan 16 23:56:23.513743 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 23:56:23.838529 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 23:56:23.838842 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:24.130859 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 23:56:24.132236 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 23:56:24.379587 dockerd[1719]: time="2026-01-16T23:56:24.379444278Z" level=info msg="Starting up" Jan 16 23:56:24.456442 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1793594454-merged.mount: Deactivated successfully. Jan 16 23:56:24.479512 dockerd[1719]: time="2026-01-16T23:56:24.479415708Z" level=info msg="Loading containers: start." Jan 16 23:56:24.587486 kernel: Initializing XFRM netlink socket Jan 16 23:56:24.664144 systemd-networkd[1380]: docker0: Link UP Jan 16 23:56:24.684276 dockerd[1719]: time="2026-01-16T23:56:24.684217346Z" level=info msg="Loading containers: done." Jan 16 23:56:24.699753 dockerd[1719]: time="2026-01-16T23:56:24.699697507Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 23:56:24.699908 dockerd[1719]: time="2026-01-16T23:56:24.699822811Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 23:56:24.700003 dockerd[1719]: time="2026-01-16T23:56:24.699963487Z" level=info msg="Daemon has completed initialization" Jan 16 23:56:24.744383 dockerd[1719]: time="2026-01-16T23:56:24.743839930Z" level=info msg="API listen on /run/docker.sock" Jan 16 23:56:24.744608 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 23:56:25.450271 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3887716509-merged.mount: Deactivated successfully. Jan 16 23:56:25.791330 containerd[1486]: time="2026-01-16T23:56:25.791052190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 16 23:56:26.516405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938506715.mount: Deactivated successfully. 
Jan 16 23:56:27.980581 containerd[1486]: time="2026-01-16T23:56:27.980505021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:27.982476 containerd[1486]: time="2026-01-16T23:56:27.982082535Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387379" Jan 16 23:56:27.983880 containerd[1486]: time="2026-01-16T23:56:27.983841870Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:27.989281 containerd[1486]: time="2026-01-16T23:56:27.989221611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:27.991166 containerd[1486]: time="2026-01-16T23:56:27.991119383Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.200024683s" Jan 16 23:56:27.991166 containerd[1486]: time="2026-01-16T23:56:27.991160006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 16 23:56:27.993510 containerd[1486]: time="2026-01-16T23:56:27.993478530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 16 23:56:29.014020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 23:56:29.021803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:29.143990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:29.155872 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:29.207553 kubelet[1923]: E0116 23:56:29.207497 1923 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:29.210511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:29.210661 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
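Each completed pull logs the image size and wall time, which gives an effective registry throughput; for the kube-apiserver pull above, 27,383,880 bytes arrived in about 2.200 s. A quick calculation with those two logged values (the throughput figure is derived, not logged):

```python
size_bytes = 27_383_880   # size "27383880" from the pull record above
elapsed_s = 2.200024683   # "in 2.200024683s" from the same record

mib_per_s = size_bytes / elapsed_s / (1024 ** 2)
print(f"{mib_per_s:.1f} MiB/s")  # ~11.9 MiB/s from registry.k8s.io
```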
Jan 16 23:56:29.594506 containerd[1486]: time="2026-01-16T23:56:29.592706215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:29.595015 containerd[1486]: time="2026-01-16T23:56:29.594594096Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553101" Jan 16 23:56:29.595761 containerd[1486]: time="2026-01-16T23:56:29.595710049Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:29.599385 containerd[1486]: time="2026-01-16T23:56:29.599338669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:29.600682 containerd[1486]: time="2026-01-16T23:56:29.600630697Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.606941531s" Jan 16 23:56:29.600766 containerd[1486]: time="2026-01-16T23:56:29.600682439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 16 23:56:29.601886 containerd[1486]: time="2026-01-16T23:56:29.601851575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 16 23:56:30.784534 containerd[1486]: time="2026-01-16T23:56:30.784360449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:30.786550 containerd[1486]: time="2026-01-16T23:56:30.786024347Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298087" Jan 16 23:56:30.788295 containerd[1486]: time="2026-01-16T23:56:30.787788842Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:30.791589 containerd[1486]: time="2026-01-16T23:56:30.791550438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:30.792858 containerd[1486]: time="2026-01-16T23:56:30.792815668Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.190816791s" Jan 16 23:56:30.792858 containerd[1486]: time="2026-01-16T23:56:30.792856523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 16 23:56:30.793711 
containerd[1486]: time="2026-01-16T23:56:30.793675107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 16 23:56:31.797363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792941583.mount: Deactivated successfully. Jan 16 23:56:32.131629 containerd[1486]: time="2026-01-16T23:56:32.130919201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.133191 containerd[1486]: time="2026-01-16T23:56:32.133125515Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258699" Jan 16 23:56:32.135777 containerd[1486]: time="2026-01-16T23:56:32.135670978Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.138697 containerd[1486]: time="2026-01-16T23:56:32.138356807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.139354 containerd[1486]: time="2026-01-16T23:56:32.139310956Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.345595074s" Jan 16 23:56:32.139354 containerd[1486]: time="2026-01-16T23:56:32.139350728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 16 23:56:32.140121 containerd[1486]: time="2026-01-16T23:56:32.140075003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 16 23:56:32.784592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352968517.mount: Deactivated successfully. 
Jan 16 23:56:33.677484 containerd[1486]: time="2026-01-16T23:56:33.676398055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:33.677977 containerd[1486]: time="2026-01-16T23:56:33.677889190Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Jan 16 23:56:33.678648 containerd[1486]: time="2026-01-16T23:56:33.678603368Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:33.682486 containerd[1486]: time="2026-01-16T23:56:33.681780979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:33.683196 containerd[1486]: time="2026-01-16T23:56:33.683073694Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.542947355s" Jan 16 23:56:33.683196 containerd[1486]: time="2026-01-16T23:56:33.683100902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 16 23:56:33.683793 containerd[1486]: time="2026-01-16T23:56:33.683608617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 23:56:34.225947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966254985.mount: Deactivated successfully. 
Jan 16 23:56:34.235226 containerd[1486]: time="2026-01-16T23:56:34.235136489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:34.237319 containerd[1486]: time="2026-01-16T23:56:34.237231374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 16 23:56:34.237699 containerd[1486]: time="2026-01-16T23:56:34.237605962Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:34.239931 containerd[1486]: time="2026-01-16T23:56:34.239872696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:34.240860 containerd[1486]: time="2026-01-16T23:56:34.240748589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 557.110123ms" Jan 16 23:56:34.240860 containerd[1486]: time="2026-01-16T23:56:34.240779718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 16 23:56:34.241392 containerd[1486]: time="2026-01-16T23:56:34.241370128Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 16 23:56:34.892951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632076830.mount: Deactivated successfully. Jan 16 23:56:36.801936 containerd[1486]: time="2026-01-16T23:56:36.800272669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:36.801936 containerd[1486]: time="2026-01-16T23:56:36.801828151Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013713" Jan 16 23:56:36.802892 containerd[1486]: time="2026-01-16T23:56:36.802843653Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:36.806659 containerd[1486]: time="2026-01-16T23:56:36.806621868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:36.808168 containerd[1486]: time="2026-01-16T23:56:36.808114173Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.566711515s" Jan 16 23:56:36.808168 containerd[1486]: time="2026-01-16T23:56:36.808158104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 16 23:56:39.263990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 16 23:56:39.271928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:39.406657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:39.415951 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:39.458010 kubelet[2087]: E0116 23:56:39.457966 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:39.461598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:39.461886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:41.841758 update_engine[1463]: I20260116 23:56:41.841614 1463 update_attempter.cc:509] Updating boot flags... Jan 16 23:56:41.909337 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2103) Jan 16 23:56:41.977524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2102) Jan 16 23:56:42.046483 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2102) Jan 16 23:56:42.237242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:42.242994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:42.279296 systemd[1]: Reloading requested from client PID 2122 ('systemctl') (unit session-7.scope)... Jan 16 23:56:42.279315 systemd[1]: Reloading... Jan 16 23:56:42.398500 zram_generator::config[2158]: No configuration found. Jan 16 23:56:42.502449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:56:42.570978 systemd[1]: Reloading finished in 291 ms. Jan 16 23:56:42.629556 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 23:56:42.629691 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 23:56:42.630236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:42.635937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:42.752086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:42.763145 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:56:42.809470 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:56:42.809470 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:56:42.809470 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 23:56:42.809913 kubelet[2211]: I0116 23:56:42.809598 2211 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:56:43.476498 kubelet[2211]: I0116 23:56:43.475186 2211 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 16 23:56:43.476498 kubelet[2211]: I0116 23:56:43.475224 2211 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:56:43.476498 kubelet[2211]: I0116 23:56:43.475719 2211 server.go:956] "Client rotation is on, will bootstrap in background" Jan 16 23:56:43.514749 kubelet[2211]: E0116 23:56:43.514691 2211 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.13.115.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 16 23:56:43.518728 kubelet[2211]: I0116 23:56:43.518673 2211 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:56:43.531866 kubelet[2211]: E0116 23:56:43.531790 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:56:43.531866 kubelet[2211]: I0116 23:56:43.531850 2211 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:56:43.536467 kubelet[2211]: I0116 23:56:43.536409 2211 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 23:56:43.537148 kubelet[2211]: I0116 23:56:43.537058 2211 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:56:43.537508 kubelet[2211]: I0116 23:56:43.537120 2211 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-32c338e5e2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 23:56:43.537635 kubelet[2211]: I0116 23:56:43.537588 2211 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:56:43.537635 kubelet[2211]: I0116 23:56:43.537608 2211 container_manager_linux.go:303] "Creating device plugin manager" Jan 16 23:56:43.538002 kubelet[2211]: I0116 23:56:43.537948 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:43.544338 kubelet[2211]: I0116 23:56:43.544078 2211 kubelet.go:480] "Attempting to sync node with API server" Jan 16 23:56:43.544338 kubelet[2211]: I0116 23:56:43.544122 2211 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:56:43.544338 kubelet[2211]: I0116 23:56:43.544158 2211 kubelet.go:386] "Adding apiserver pod source" Jan 16 23:56:43.545427 kubelet[2211]: I0116 23:56:43.545396 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:56:43.550446 kubelet[2211]: E0116 23:56:43.550407 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.115.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-32c338e5e2&limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 16 23:56:43.551304 kubelet[2211]: E0116 23:56:43.551264 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.115.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 16 23:56:43.551547 kubelet[2211]: I0116 23:56:43.551527 2211 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:56:43.552855 kubelet[2211]: I0116 23:56:43.552832 2211 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 16 23:56:43.553125 kubelet[2211]: W0116 23:56:43.553107 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 23:56:43.559795 kubelet[2211]: I0116 23:56:43.559759 2211 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:56:43.560788 kubelet[2211]: I0116 23:56:43.559996 2211 server.go:1289] "Started kubelet" Jan 16 23:56:43.563230 kubelet[2211]: I0116 23:56:43.563198 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:56:43.566722 kubelet[2211]: E0116 23:56:43.565450 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.115.208:6443/api/v1/namespaces/default/events\": dial tcp 49.13.115.208:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-32c338e5e2.188b5b6d462c4650 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-32c338e5e2,UID:ci-4081-3-6-n-32c338e5e2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-32c338e5e2,},FirstTimestamp:2026-01-16 23:56:43.559921232 +0000 UTC m=+0.788876272,LastTimestamp:2026-01-16 23:56:43.559921232 +0000 UTC m=+0.788876272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-32c338e5e2,}" Jan 16 23:56:43.568361 kubelet[2211]: I0116 23:56:43.568301 2211 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:56:43.569334 kubelet[2211]: I0116 23:56:43.569299 2211 server.go:317] "Adding debug handlers to kubelet server" Jan 16 23:56:43.572667 kubelet[2211]: I0116 23:56:43.572597 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:56:43.572861 kubelet[2211]: I0116 23:56:43.572838 2211 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:56:43.573128 kubelet[2211]: I0116 23:56:43.573100 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:56:43.574668 kubelet[2211]: E0116 23:56:43.574376 2211 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" Jan 16 23:56:43.574668 kubelet[2211]: I0116 23:56:43.574436 2211 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:56:43.574794 kubelet[2211]: I0116 23:56:43.574770 2211 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:56:43.574996 kubelet[2211]: I0116 23:56:43.574836 2211 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:56:43.575332 kubelet[2211]: E0116 23:56:43.575296 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://49.13.115.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 16 23:56:43.576221 kubelet[2211]: E0116 23:56:43.575870 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.115.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32c338e5e2?timeout=10s\": dial tcp 49.13.115.208:6443: connect: connection refused" interval="200ms" Jan 16 23:56:43.577470 kubelet[2211]: E0116 23:56:43.576603 2211 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:56:43.577470 kubelet[2211]: I0116 23:56:43.576818 2211 factory.go:223] Registration of the systemd container factory successfully Jan 16 23:56:43.577470 kubelet[2211]: I0116 23:56:43.576905 2211 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:56:43.578556 kubelet[2211]: I0116 23:56:43.578538 2211 factory.go:223] Registration of the containerd container factory successfully Jan 16 23:56:43.589041 kubelet[2211]: I0116 23:56:43.588962 2211 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 16 23:56:43.590333 kubelet[2211]: I0116 23:56:43.590283 2211 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 16 23:56:43.590333 kubelet[2211]: I0116 23:56:43.590315 2211 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 16 23:56:43.590551 kubelet[2211]: I0116 23:56:43.590343 2211 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 23:56:43.590551 kubelet[2211]: I0116 23:56:43.590351 2211 kubelet.go:2436] "Starting kubelet main sync loop" Jan 16 23:56:43.590551 kubelet[2211]: E0116 23:56:43.590451 2211 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:56:43.598086 kubelet[2211]: E0116 23:56:43.598042 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.115.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 16 23:56:43.603965 kubelet[2211]: I0116 23:56:43.603921 2211 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:56:43.603965 kubelet[2211]: I0116 23:56:43.603940 2211 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:56:43.603965 kubelet[2211]: I0116 23:56:43.603959 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:43.605837 kubelet[2211]: I0116 23:56:43.605811 2211 policy_none.go:49] "None policy: Start" Jan 16 23:56:43.605837 kubelet[2211]: I0116 23:56:43.605838 2211 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:56:43.606012 kubelet[2211]: I0116 23:56:43.605848 2211 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:56:43.611426 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 16 23:56:43.624434 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 23:56:43.628196 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 23:56:43.635717 kubelet[2211]: E0116 23:56:43.635683 2211 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 16 23:56:43.637386 kubelet[2211]: I0116 23:56:43.637357 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:56:43.637755 kubelet[2211]: I0116 23:56:43.637709 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:56:43.638344 kubelet[2211]: I0116 23:56:43.638223 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:56:43.641364 kubelet[2211]: E0116 23:56:43.640854 2211 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 23:56:43.641364 kubelet[2211]: E0116 23:56:43.640909 2211 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-32c338e5e2\" not found" Jan 16 23:56:43.706405 systemd[1]: Created slice kubepods-burstable-podebf7c238877d49620b63bdf994b25361.slice - libcontainer container kubepods-burstable-podebf7c238877d49620b63bdf994b25361.slice. Jan 16 23:56:43.723145 kubelet[2211]: E0116 23:56:43.722743 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.727300 systemd[1]: Created slice kubepods-burstable-podcc7f5b95766f0856c01a4583a5cd7206.slice - libcontainer container kubepods-burstable-podcc7f5b95766f0856c01a4583a5cd7206.slice. Jan 16 23:56:43.742710 kubelet[2211]: I0116 23:56:43.742678 2211 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.743196 kubelet[2211]: E0116 23:56:43.742953 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.745687 kubelet[2211]: E0116 23:56:43.745522 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.115.208:6443/api/v1/nodes\": dial tcp 49.13.115.208:6443: connect: connection refused" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.749431 systemd[1]: Created slice kubepods-burstable-pod5f013bc6fa1c2296564f2e5dbffaaa85.slice - libcontainer container kubepods-burstable-pod5f013bc6fa1c2296564f2e5dbffaaa85.slice. 
Jan 16 23:56:43.753517 kubelet[2211]: E0116 23:56:43.753306 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.777442 kubelet[2211]: E0116 23:56:43.777376 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.115.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32c338e5e2?timeout=10s\": dial tcp 49.13.115.208:6443: connect: connection refused" interval="400ms" Jan 16 23:56:43.876270 kubelet[2211]: I0116 23:56:43.876154 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.876270 kubelet[2211]: I0116 23:56:43.876239 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.876270 kubelet[2211]: I0116 23:56:43.876283 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf7c238877d49620b63bdf994b25361-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" (UID: \"ebf7c238877d49620b63bdf994b25361\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.877055 kubelet[2211]: I0116 23:56:43.876322 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.877055 kubelet[2211]: I0116 23:56:43.876358 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.877055 kubelet[2211]: I0116 23:56:43.876509 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.877055 kubelet[2211]: I0116 23:56:43.876556 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f013bc6fa1c2296564f2e5dbffaaa85-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-32c338e5e2\" (UID: \"5f013bc6fa1c2296564f2e5dbffaaa85\") " 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.877055 kubelet[2211]: I0116 23:56:43.876589 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf7c238877d49620b63bdf994b25361-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" (UID: \"ebf7c238877d49620b63bdf994b25361\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.877279 kubelet[2211]: I0116 23:56:43.876650 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf7c238877d49620b63bdf994b25361-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" (UID: \"ebf7c238877d49620b63bdf994b25361\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.948910 kubelet[2211]: I0116 23:56:43.948554 2211 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:43.949125 kubelet[2211]: E0116 23:56:43.949063 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.115.208:6443/api/v1/nodes\": dial tcp 49.13.115.208:6443: connect: connection refused" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:44.024809 containerd[1486]: time="2026-01-16T23:56:44.024679216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-32c338e5e2,Uid:ebf7c238877d49620b63bdf994b25361,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:44.046178 containerd[1486]: time="2026-01-16T23:56:44.046042656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-32c338e5e2,Uid:cc7f5b95766f0856c01a4583a5cd7206,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:44.055431 containerd[1486]: time="2026-01-16T23:56:44.055173275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-32c338e5e2,Uid:5f013bc6fa1c2296564f2e5dbffaaa85,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:44.178490 kubelet[2211]: E0116 23:56:44.178393 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.115.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32c338e5e2?timeout=10s\": dial tcp 49.13.115.208:6443: connect: connection refused" interval="800ms" Jan 16 23:56:44.351661 kubelet[2211]: I0116 23:56:44.351253 2211 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:44.351760 kubelet[2211]: E0116 23:56:44.351642 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.115.208:6443/api/v1/nodes\": dial tcp 49.13.115.208:6443: connect: connection refused" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:44.550594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736376714.mount: Deactivated successfully. 
Jan 16 23:56:44.558497 kubelet[2211]: E0116 23:56:44.556941 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.115.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 16 23:56:44.558656 containerd[1486]: time="2026-01-16T23:56:44.557106573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:44.559122 containerd[1486]: time="2026-01-16T23:56:44.559086266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 16 23:56:44.560856 containerd[1486]: time="2026-01-16T23:56:44.560824159Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:44.561887 containerd[1486]: time="2026-01-16T23:56:44.561847692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:56:44.562903 containerd[1486]: time="2026-01-16T23:56:44.562875505Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:44.563420 containerd[1486]: time="2026-01-16T23:56:44.563382910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:56:44.563888 containerd[1486]: time="2026-01-16T23:56:44.563863231Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:44.565890 kubelet[2211]: E0116 23:56:44.565856 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.115.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-32c338e5e2&limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 16 23:56:44.568545 containerd[1486]: time="2026-01-16T23:56:44.568492291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:44.570482 containerd[1486]: time="2026-01-16T23:56:44.569445932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.196804ms" Jan 16 23:56:44.571197 containerd[1486]: time="2026-01-16T23:56:44.571165102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.378027ms" Jan 16 23:56:44.571700 containerd[1486]: time="2026-01-16T23:56:44.571653824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.349644ms" Jan 16 23:56:44.710135 containerd[1486]: time="2026-01-16T23:56:44.709787340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:44.710135 containerd[1486]: time="2026-01-16T23:56:44.709860072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:44.710135 containerd[1486]: time="2026-01-16T23:56:44.709876115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:44.710135 containerd[1486]: time="2026-01-16T23:56:44.709965610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:44.715205 containerd[1486]: time="2026-01-16T23:56:44.715083353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:44.715406 containerd[1486]: time="2026-01-16T23:56:44.715198572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:44.715406 containerd[1486]: time="2026-01-16T23:56:44.715213654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:44.715406 containerd[1486]: time="2026-01-16T23:56:44.715385723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:44.718290 containerd[1486]: time="2026-01-16T23:56:44.718165552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:44.718290 containerd[1486]: time="2026-01-16T23:56:44.718254167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:44.718480 containerd[1486]: time="2026-01-16T23:56:44.718273290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:44.718480 containerd[1486]: time="2026-01-16T23:56:44.718348543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:44.738658 systemd[1]: Started cri-containerd-4171e4868b18c019bcc930d1e4c51cfc32b9f7dc899332eabae7f6e057370a68.scope - libcontainer container 4171e4868b18c019bcc930d1e4c51cfc32b9f7dc899332eabae7f6e057370a68. Jan 16 23:56:44.745559 systemd[1]: Started cri-containerd-fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb.scope - libcontainer container fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb. 
Jan 16 23:56:44.760310 systemd[1]: Started cri-containerd-216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b.scope - libcontainer container 216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b. Jan 16 23:56:44.805798 containerd[1486]: time="2026-01-16T23:56:44.805595844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-32c338e5e2,Uid:ebf7c238877d49620b63bdf994b25361,Namespace:kube-system,Attempt:0,} returns sandbox id \"4171e4868b18c019bcc930d1e4c51cfc32b9f7dc899332eabae7f6e057370a68\"" Jan 16 23:56:44.814611 containerd[1486]: time="2026-01-16T23:56:44.814569796Z" level=info msg="CreateContainer within sandbox \"4171e4868b18c019bcc930d1e4c51cfc32b9f7dc899332eabae7f6e057370a68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 23:56:44.826579 containerd[1486]: time="2026-01-16T23:56:44.826445838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-32c338e5e2,Uid:5f013bc6fa1c2296564f2e5dbffaaa85,Namespace:kube-system,Attempt:0,} returns sandbox id \"216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b\"" Jan 16 23:56:44.828047 kubelet[2211]: E0116 23:56:44.827995 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.115.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 16 23:56:44.833695 containerd[1486]: time="2026-01-16T23:56:44.833653492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-32c338e5e2,Uid:cc7f5b95766f0856c01a4583a5cd7206,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb\"" Jan 16 23:56:44.835058 containerd[1486]: time="2026-01-16T23:56:44.834948350Z" level=info msg="CreateContainer within sandbox \"216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 23:56:44.839877 containerd[1486]: time="2026-01-16T23:56:44.839759201Z" level=info msg="CreateContainer within sandbox \"4171e4868b18c019bcc930d1e4c51cfc32b9f7dc899332eabae7f6e057370a68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"228f68df5ac1976aa77faea9f9c701c8d7366b70242a3ee47ae61cad6220bcbd\"" Jan 16 23:56:44.841372 containerd[1486]: time="2026-01-16T23:56:44.841175800Z" level=info msg="CreateContainer within sandbox \"fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 23:56:44.841685 containerd[1486]: time="2026-01-16T23:56:44.841655320Z" level=info msg="StartContainer for \"228f68df5ac1976aa77faea9f9c701c8d7366b70242a3ee47ae61cad6220bcbd\"" Jan 16 23:56:44.861474 containerd[1486]: time="2026-01-16T23:56:44.861361121Z" level=info msg="CreateContainer within sandbox \"216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b\"" Jan 16 23:56:44.862267 containerd[1486]: time="2026-01-16T23:56:44.862076642Z" level=info msg="StartContainer for \"625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b\"" Jan 16 23:56:44.866001 containerd[1486]: 
time="2026-01-16T23:56:44.865882963Z" level=info msg="CreateContainer within sandbox \"fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0\"" Jan 16 23:56:44.866573 containerd[1486]: time="2026-01-16T23:56:44.866542354Z" level=info msg="StartContainer for \"d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0\"" Jan 16 23:56:44.882649 systemd[1]: Started cri-containerd-228f68df5ac1976aa77faea9f9c701c8d7366b70242a3ee47ae61cad6220bcbd.scope - libcontainer container 228f68df5ac1976aa77faea9f9c701c8d7366b70242a3ee47ae61cad6220bcbd. Jan 16 23:56:44.901670 systemd[1]: Started cri-containerd-625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b.scope - libcontainer container 625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b. Jan 16 23:56:44.917703 systemd[1]: Started cri-containerd-d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0.scope - libcontainer container d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0. Jan 16 23:56:44.922344 kubelet[2211]: E0116 23:56:44.922272 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.115.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.115.208:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 16 23:56:44.945449 containerd[1486]: time="2026-01-16T23:56:44.945146359Z" level=info msg="StartContainer for \"228f68df5ac1976aa77faea9f9c701c8d7366b70242a3ee47ae61cad6220bcbd\" returns successfully" Jan 16 23:56:44.967246 containerd[1486]: time="2026-01-16T23:56:44.967065173Z" level=info msg="StartContainer for \"625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b\" returns successfully" Jan 16 23:56:44.980384 kubelet[2211]: E0116 23:56:44.980159 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.115.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-32c338e5e2?timeout=10s\": dial tcp 49.13.115.208:6443: connect: connection refused" interval="1.6s" Jan 16 23:56:44.988993 containerd[1486]: time="2026-01-16T23:56:44.988947620Z" level=info msg="StartContainer for \"d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0\" returns successfully" Jan 16 23:56:45.155094 kubelet[2211]: I0116 23:56:45.154934 2211 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:45.611166 kubelet[2211]: E0116 23:56:45.610999 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:45.613382 kubelet[2211]: E0116 23:56:45.612767 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:45.617973 kubelet[2211]: E0116 23:56:45.617943 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:46.621036 kubelet[2211]: E0116 23:56:46.619419 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:46.621036 kubelet[2211]: E0116 23:56:46.619594 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.125381 kubelet[2211]: E0116 23:56:47.125340 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.604386 kubelet[2211]: E0116 23:56:47.604326 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.620308 kubelet[2211]: E0116 23:56:47.619917 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.695020 kubelet[2211]: I0116 23:56:47.694942 2211 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.695020 kubelet[2211]: E0116 23:56:47.694989 2211 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-32c338e5e2\": node \"ci-4081-3-6-n-32c338e5e2\" not found" Jan 16 23:56:47.776514 kubelet[2211]: I0116 23:56:47.776467 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.787896 kubelet[2211]: E0116 23:56:47.787802 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.787896 kubelet[2211]: I0116 23:56:47.787838 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.790775 kubelet[2211]: E0116 23:56:47.790518 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-32c338e5e2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.790775 kubelet[2211]: I0116 23:56:47.790548 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:47.793861 kubelet[2211]: E0116 23:56:47.793822 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:48.554567 kubelet[2211]: I0116 23:56:48.554522 2211 apiserver.go:52] "Watching apiserver" Jan 16 23:56:48.575671 kubelet[2211]: I0116 23:56:48.575609 2211 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:56:49.726926 systemd[1]: Reloading requested from client PID 2489 ('systemctl') (unit session-7.scope)... Jan 16 23:56:49.726943 systemd[1]: Reloading... Jan 16 23:56:49.841555 zram_generator::config[2525]: No configuration found. 
Jan 16 23:56:49.963431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:56:50.050362 systemd[1]: Reloading finished in 323 ms. Jan 16 23:56:50.092774 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:50.107020 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 23:56:50.107367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:50.107454 systemd[1]: kubelet.service: Consumed 1.227s CPU time, 125.6M memory peak, 0B memory swap peak. Jan 16 23:56:50.113091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:50.253805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:50.259850 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:56:50.316786 kubelet[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:56:50.316786 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:56:50.316786 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:56:50.316786 kubelet[2574]: I0116 23:56:50.314912 2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:56:50.327297 kubelet[2574]: I0116 23:56:50.327267 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 16 23:56:50.327445 kubelet[2574]: I0116 23:56:50.327435 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:56:50.327780 kubelet[2574]: I0116 23:56:50.327764 2574 server.go:956] "Client rotation is on, will bootstrap in background" Jan 16 23:56:50.329195 kubelet[2574]: I0116 23:56:50.329172 2574 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 16 23:56:50.335451 kubelet[2574]: I0116 23:56:50.335068 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:56:50.340693 kubelet[2574]: E0116 23:56:50.340653 2574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:56:50.340904 kubelet[2574]: I0116 23:56:50.340888 2574 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:56:50.343826 kubelet[2574]: I0116 23:56:50.343799 2574 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 23:56:50.344256 kubelet[2574]: I0116 23:56:50.344226 2574 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:56:50.344601 kubelet[2574]: I0116 23:56:50.344336 2574 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-32c338e5e2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 23:56:50.344740 kubelet[2574]: I0116 23:56:50.344727 2574 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:56:50.344796 kubelet[2574]: I0116 23:56:50.344788 2574 container_manager_linux.go:303] "Creating device plugin manager" Jan 16 23:56:50.344894 kubelet[2574]: I0116 23:56:50.344885 2574 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:50.345140 kubelet[2574]: I0116 23:56:50.345126 2574 kubelet.go:480] "Attempting to sync node with API server" Jan 16 23:56:50.345226 kubelet[2574]: I0116 23:56:50.345215 2574 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:56:50.345325 kubelet[2574]: I0116 23:56:50.345317 2574 kubelet.go:386] "Adding apiserver pod source" Jan 16 23:56:50.345384 kubelet[2574]: I0116 23:56:50.345376 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:56:50.349348 kubelet[2574]: I0116 23:56:50.349329 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:56:50.350049 kubelet[2574]: I0116 23:56:50.350030 2574 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 16 23:56:50.352283 kubelet[2574]: I0116 23:56:50.352267 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:56:50.352432 kubelet[2574]: I0116 23:56:50.352420 2574 server.go:1289] "Started kubelet" Jan 16 23:56:50.354568 kubelet[2574]: I0116 23:56:50.354550 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:56:50.360790 kubelet[2574]: I0116 
23:56:50.360762 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 16 23:56:50.367610 kubelet[2574]: I0116 23:56:50.367314 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:56:50.370498 kubelet[2574]: I0116 23:56:50.369732 2574 server.go:317] "Adding debug handlers to kubelet server" Jan 16 23:56:50.372464 kubelet[2574]: I0116 23:56:50.371037 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:56:50.373340 kubelet[2574]: E0116 23:56:50.373300 2574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-32c338e5e2\" not found" Jan 16 23:56:50.375489 kubelet[2574]: I0116 23:56:50.373485 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:56:50.375809 kubelet[2574]: I0116 23:56:50.368434 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:56:50.376018 kubelet[2574]: I0116 23:56:50.375994 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:56:50.377787 kubelet[2574]: I0116 23:56:50.377753 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 16 23:56:50.377787 kubelet[2574]: I0116 23:56:50.377792 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 16 23:56:50.377879 kubelet[2574]: I0116 23:56:50.377815 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 23:56:50.377879 kubelet[2574]: I0116 23:56:50.377821 2574 kubelet.go:2436] "Starting kubelet main sync loop" Jan 16 23:56:50.377879 kubelet[2574]: E0116 23:56:50.377861 2574 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:56:50.381640 kubelet[2574]: I0116 23:56:50.381620 2574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:56:50.381841 kubelet[2574]: I0116 23:56:50.381829 2574 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:56:50.385045 kubelet[2574]: E0116 23:56:50.385018 2574 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:56:50.387745 kubelet[2574]: I0116 23:56:50.387722 2574 factory.go:223] Registration of the containerd container factory successfully Jan 16 23:56:50.387855 kubelet[2574]: I0116 23:56:50.387845 2574 factory.go:223] Registration of the systemd container factory successfully Jan 16 23:56:50.388866 kubelet[2574]: I0116 23:56:50.388845 2574 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:56:50.455341 kubelet[2574]: I0116 23:56:50.455238 2574 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:56:50.455341 kubelet[2574]: I0116 23:56:50.455261 2574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:56:50.455341 kubelet[2574]: I0116 23:56:50.455291 2574 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:50.455722 kubelet[2574]: I0116 23:56:50.455441 2574 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 23:56:50.455722 kubelet[2574]: I0116 23:56:50.455452 2574 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 23:56:50.455722 kubelet[2574]: I0116 23:56:50.455570 2574 policy_none.go:49] "None policy: Start" Jan 16 23:56:50.455722 kubelet[2574]: I0116 23:56:50.455603 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:56:50.455722 kubelet[2574]: I0116 23:56:50.455616 2574 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:56:50.455937 kubelet[2574]: I0116 23:56:50.455734 2574 state_mem.go:75] "Updated machine memory state" Jan 16 23:56:50.461681 kubelet[2574]: E0116 23:56:50.461649 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 16 23:56:50.461884 kubelet[2574]: I0116 23:56:50.461828 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:56:50.461884 kubelet[2574]: I0116 23:56:50.461846 2574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:56:50.463533 kubelet[2574]: I0116 23:56:50.462500 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:56:50.464852 kubelet[2574]: E0116 23:56:50.463662 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 23:56:50.480154 kubelet[2574]: I0116 23:56:50.480098 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.481959 kubelet[2574]: I0116 23:56:50.480543 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.482430 kubelet[2574]: I0116 23:56:50.482251 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.572111 kubelet[2574]: I0116 23:56:50.570132 2574 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.582881 kubelet[2574]: I0116 23:56:50.582844 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.582881 kubelet[2574]: I0116 23:56:50.582882 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf7c238877d49620b63bdf994b25361-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" (UID: \"ebf7c238877d49620b63bdf994b25361\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.583090 kubelet[2574]: I0116 23:56:50.583048 2574 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.583143 kubelet[2574]: I0116 23:56:50.583121 2574 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.583475 kubelet[2574]: I0116 23:56:50.583413 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf7c238877d49620b63bdf994b25361-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" (UID: \"ebf7c238877d49620b63bdf994b25361\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.584010 kubelet[2574]: I0116 23:56:50.583972 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.584091 kubelet[2574]: I0116 23:56:50.584042 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.584091 kubelet[2574]: I0116 23:56:50.584061 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.584091 kubelet[2574]: I0116 23:56:50.584082 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f013bc6fa1c2296564f2e5dbffaaa85-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-32c338e5e2\" (UID: \"5f013bc6fa1c2296564f2e5dbffaaa85\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.584196 kubelet[2574]: I0116 23:56:50.584098 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf7c238877d49620b63bdf994b25361-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-32c338e5e2\" (UID: \"ebf7c238877d49620b63bdf994b25361\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:50.584373 kubelet[2574]: I0116 23:56:50.584350 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc7f5b95766f0856c01a4583a5cd7206-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-32c338e5e2\" (UID: \"cc7f5b95766f0856c01a4583a5cd7206\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:51.350053 kubelet[2574]: I0116 23:56:51.349903 2574 apiserver.go:52] "Watching apiserver" Jan 16 23:56:51.382489 kubelet[2574]: I0116 23:56:51.382442 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:56:51.444673 kubelet[2574]: I0116 23:56:51.439320 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:51.451543 kubelet[2574]: E0116 23:56:51.451509 2574 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-32c338e5e2\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" Jan 16 23:56:51.473329 kubelet[2574]: I0116 23:56:51.472969 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" podStartSLOduration=1.472918935 podStartE2EDuration="1.472918935s" podCreationTimestamp="2026-01-16 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:51.472804441 +0000 UTC m=+1.206867269" watchObservedRunningTime="2026-01-16 23:56:51.472918935 +0000 UTC m=+1.206981803" Jan 16 23:56:51.484952 kubelet[2574]: I0116 23:56:51.484608 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-32c338e5e2" podStartSLOduration=1.484589859 podStartE2EDuration="1.484589859s" podCreationTimestamp="2026-01-16 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:51.484491488 +0000 UTC m=+1.218554356" watchObservedRunningTime="2026-01-16 23:56:51.484589859 +0000 UTC m=+1.218652687" Jan 16 23:56:51.516148 kubelet[2574]: I0116 23:56:51.516036 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-32c338e5e2" podStartSLOduration=1.5160173609999998 podStartE2EDuration="1.516017361s" 
podCreationTimestamp="2026-01-16 23:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:51.49830399 +0000 UTC m=+1.232366858" watchObservedRunningTime="2026-01-16 23:56:51.516017361 +0000 UTC m=+1.250080189" Jan 16 23:56:56.013388 kubelet[2574]: I0116 23:56:56.013346 2574 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 23:56:56.014389 kubelet[2574]: I0116 23:56:56.014017 2574 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 23:56:56.014666 containerd[1486]: time="2026-01-16T23:56:56.013795317Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 23:56:57.053129 systemd[1]: Created slice kubepods-besteffort-pod7cbb2d26_a860_4235_9e20_c3c00ab1ffae.slice - libcontainer container kubepods-besteffort-pod7cbb2d26_a860_4235_9e20_c3c00ab1ffae.slice. Jan 16 23:56:57.124083 kubelet[2574]: I0116 23:56:57.124025 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62n5g\" (UniqueName: \"kubernetes.io/projected/7cbb2d26-a860-4235-9e20-c3c00ab1ffae-kube-api-access-62n5g\") pod \"kube-proxy-9spph\" (UID: \"7cbb2d26-a860-4235-9e20-c3c00ab1ffae\") " pod="kube-system/kube-proxy-9spph" Jan 16 23:56:57.124083 kubelet[2574]: I0116 23:56:57.124080 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cbb2d26-a860-4235-9e20-c3c00ab1ffae-kube-proxy\") pod \"kube-proxy-9spph\" (UID: \"7cbb2d26-a860-4235-9e20-c3c00ab1ffae\") " pod="kube-system/kube-proxy-9spph" Jan 16 23:56:57.124571 kubelet[2574]: I0116 23:56:57.124101 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cbb2d26-a860-4235-9e20-c3c00ab1ffae-xtables-lock\") pod \"kube-proxy-9spph\" (UID: \"7cbb2d26-a860-4235-9e20-c3c00ab1ffae\") " pod="kube-system/kube-proxy-9spph" Jan 16 23:56:57.124571 kubelet[2574]: I0116 23:56:57.124117 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cbb2d26-a860-4235-9e20-c3c00ab1ffae-lib-modules\") pod \"kube-proxy-9spph\" (UID: \"7cbb2d26-a860-4235-9e20-c3c00ab1ffae\") " pod="kube-system/kube-proxy-9spph" Jan 16 23:56:57.230515 systemd[1]: Created slice kubepods-besteffort-pod1accf453_fbfa_4478_9d2d_8212daab5e43.slice - libcontainer container kubepods-besteffort-pod1accf453_fbfa_4478_9d2d_8212daab5e43.slice. 
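[The two "Created slice" entries above show kubelet's systemd cgroup driver at work (the NodeConfig dump earlier reports CgroupDriver:"systemd"): each BestEffort pod gets a kubepods-besteffort-pod<uid>.slice unit, with the pod UID's dashes rewritten to underscores because "-" in systemd slice names expresses hierarchy. A minimal Go sketch of the naming convention visible in these entries; illustrative only, not kubelet's actual code:]

package main

import (
	"fmt"
	"strings"
)

// besteffortSlice rebuilds the slice unit name seen in the log from a pod
// UID. Systemd treats "-" in a slice name as nesting (kubepods-besteffort.slice
// lives under kubepods.slice), so the UID's dashes become underscores.
func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UIDs taken from the kube-proxy-9spph and tigera-operator entries above.
	fmt.Println(besteffortSlice("7cbb2d26-a860-4235-9e20-c3c00ab1ffae"))
	fmt.Println(besteffortSlice("1accf453-fbfa-4478-9d2d-8212daab5e43"))
	// Output matches the two slice names in the log.
}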
Jan 16 23:56:57.325432 kubelet[2574]: I0116 23:56:57.325169 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlfs\" (UniqueName: \"kubernetes.io/projected/1accf453-fbfa-4478-9d2d-8212daab5e43-kube-api-access-swlfs\") pod \"tigera-operator-7dcd859c48-wk4cr\" (UID: \"1accf453-fbfa-4478-9d2d-8212daab5e43\") " pod="tigera-operator/tigera-operator-7dcd859c48-wk4cr" Jan 16 23:56:57.325432 kubelet[2574]: I0116 23:56:57.325273 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1accf453-fbfa-4478-9d2d-8212daab5e43-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wk4cr\" (UID: \"1accf453-fbfa-4478-9d2d-8212daab5e43\") " pod="tigera-operator/tigera-operator-7dcd859c48-wk4cr" Jan 16 23:56:57.364113 containerd[1486]: time="2026-01-16T23:56:57.364035470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9spph,Uid:7cbb2d26-a860-4235-9e20-c3c00ab1ffae,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:57.396267 containerd[1486]: time="2026-01-16T23:56:57.395269582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:57.396267 containerd[1486]: time="2026-01-16T23:56:57.395349430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:57.396267 containerd[1486]: time="2026-01-16T23:56:57.395395474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:57.396267 containerd[1486]: time="2026-01-16T23:56:57.395613854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:57.416657 systemd[1]: Started cri-containerd-26d4b8a7d7bcaa3bff3cc1e34b00490a782de275aa5543b161cd2b8cb45696f9.scope - libcontainer container 26d4b8a7d7bcaa3bff3cc1e34b00490a782de275aa5543b161cd2b8cb45696f9. Jan 16 23:56:57.451525 containerd[1486]: time="2026-01-16T23:56:57.451152432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9spph,Uid:7cbb2d26-a860-4235-9e20-c3c00ab1ffae,Namespace:kube-system,Attempt:0,} returns sandbox id \"26d4b8a7d7bcaa3bff3cc1e34b00490a782de275aa5543b161cd2b8cb45696f9\"" Jan 16 23:56:57.458561 containerd[1486]: time="2026-01-16T23:56:57.458431591Z" level=info msg="CreateContainer within sandbox \"26d4b8a7d7bcaa3bff3cc1e34b00490a782de275aa5543b161cd2b8cb45696f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 23:56:57.473071 containerd[1486]: time="2026-01-16T23:56:57.472955345Z" level=info msg="CreateContainer within sandbox \"26d4b8a7d7bcaa3bff3cc1e34b00490a782de275aa5543b161cd2b8cb45696f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2201adffb3e7144f041e9baef716b6dca7a778375c94fc8be1901cb9e8f2abed\"" Jan 16 23:56:57.475124 containerd[1486]: time="2026-01-16T23:56:57.473857509Z" level=info msg="StartContainer for \"2201adffb3e7144f041e9baef716b6dca7a778375c94fc8be1901cb9e8f2abed\"" Jan 16 23:56:57.503661 systemd[1]: Started cri-containerd-2201adffb3e7144f041e9baef716b6dca7a778375c94fc8be1901cb9e8f2abed.scope - libcontainer container 2201adffb3e7144f041e9baef716b6dca7a778375c94fc8be1901cb9e8f2abed. 
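[The sandbox and container operations above travel over a local unix socket to containerd, whereas the earlier crio factory registration failed because /var/run/crio/crio.sock does not exist on this host. A small sketch of that kind of socket probe; the containerd path below is its conventional default and is an assumption, since it does not appear in this log:]

package main

import (
	"fmt"
	"net"
	"time"
)

// probe dials a local unix socket the way a runtime client would. A missing
// socket fails with "connect: no such file or directory", the same error the
// crio factory registration reported above.
func probe(path string) {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		fmt.Println("unavailable:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable:", path)
}

func main() {
	probe("/var/run/crio/crio.sock")         // absent on this host, per the log
	probe("/run/containerd/containerd.sock") // assumed default containerd path
}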
Jan 16 23:56:57.539233 containerd[1486]: time="2026-01-16T23:56:57.538482055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wk4cr,Uid:1accf453-fbfa-4478-9d2d-8212daab5e43,Namespace:tigera-operator,Attempt:0,}" Jan 16 23:56:57.539816 containerd[1486]: time="2026-01-16T23:56:57.539762654Z" level=info msg="StartContainer for \"2201adffb3e7144f041e9baef716b6dca7a778375c94fc8be1901cb9e8f2abed\" returns successfully" Jan 16 23:56:57.576775 containerd[1486]: time="2026-01-16T23:56:57.576592368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:57.576775 containerd[1486]: time="2026-01-16T23:56:57.576702618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:57.576775 containerd[1486]: time="2026-01-16T23:56:57.576743822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:57.577517 containerd[1486]: time="2026-01-16T23:56:57.577380602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:57.601030 systemd[1]: Started cri-containerd-1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39.scope - libcontainer container 1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39. Jan 16 23:56:57.647422 containerd[1486]: time="2026-01-16T23:56:57.647353406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wk4cr,Uid:1accf453-fbfa-4478-9d2d-8212daab5e43,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39\"" Jan 16 23:56:57.652537 containerd[1486]: time="2026-01-16T23:56:57.652479124Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 23:56:58.604734 kubelet[2574]: I0116 23:56:58.603995 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9spph" podStartSLOduration=1.603972441 podStartE2EDuration="1.603972441s" podCreationTimestamp="2026-01-16 23:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:58.472604343 +0000 UTC m=+8.206667251" watchObservedRunningTime="2026-01-16 23:56:58.603972441 +0000 UTC m=+8.338035269" Jan 16 23:56:59.172452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695648217.mount: Deactivated successfully. 
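[The podStartSLOduration fields above are plain timestamp arithmetic: for kube-proxy-9spph, which pulled no image (firstStartedPulling and lastFinishedPulling are the zero time), the logged 1.603972441s is watchObservedRunningTime minus podCreationTimestamp. A quick reproduction of that subtraction using the values printed in the entry; parse errors elided for brevity:]

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps the latency tracker prints; Go's
	// time.Parse accepts the optional fractional seconds in the input.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-16 23:56:57 +0000 UTC")
	watched, _ := time.Parse(layout, "2026-01-16 23:56:58.603972441 +0000 UTC")
	fmt.Println(watched.Sub(created)) // 1.603972441s, the logged podStartSLOduration
}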
Jan 16 23:56:59.605500 containerd[1486]: time="2026-01-16T23:56:59.603768979Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:59.605500 containerd[1486]: time="2026-01-16T23:56:59.605380958Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 16 23:56:59.606419 containerd[1486]: time="2026-01-16T23:56:59.605654542Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:59.608348 containerd[1486]: time="2026-01-16T23:56:59.608278208Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:59.609222 containerd[1486]: time="2026-01-16T23:56:59.609183486Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.956641556s" Jan 16 23:56:59.609404 containerd[1486]: time="2026-01-16T23:56:59.609306577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 16 23:56:59.615627 containerd[1486]: time="2026-01-16T23:56:59.615591119Z" level=info msg="CreateContainer within sandbox \"1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 23:56:59.628782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872864944.mount: Deactivated successfully. Jan 16 23:56:59.633624 containerd[1486]: time="2026-01-16T23:56:59.633574671Z" level=info msg="CreateContainer within sandbox \"1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6\"" Jan 16 23:56:59.635680 containerd[1486]: time="2026-01-16T23:56:59.634657725Z" level=info msg="StartContainer for \"31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6\"" Jan 16 23:56:59.668807 systemd[1]: Started cri-containerd-31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6.scope - libcontainer container 31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6. 
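[The tigera-operator pull above reports both the bytes read (22152004) and the wall time (1.956641556s), which works out to a bit over 11 MB/s. A one-liner to recover the effective rate from the logged values, assuming "bytes read" covers the whole transfer:]

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 22152004                 // from "stop pulling image ... bytes read"
	d, _ := time.ParseDuration("1.956641556s") // from "Pulled image ... in"
	fmt.Printf("%.1f MB/s\n", bytesRead/d.Seconds()/1e6)
}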
Jan 16 23:56:59.699850 containerd[1486]: time="2026-01-16T23:56:59.699796106Z" level=info msg="StartContainer for \"31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6\" returns successfully" Jan 16 23:57:00.478323 kubelet[2574]: I0116 23:57:00.477744 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wk4cr" podStartSLOduration=1.518126567 podStartE2EDuration="3.477725223s" podCreationTimestamp="2026-01-16 23:56:57 +0000 UTC" firstStartedPulling="2026-01-16 23:56:57.651890589 +0000 UTC m=+7.385953457" lastFinishedPulling="2026-01-16 23:56:59.611489285 +0000 UTC m=+9.345552113" observedRunningTime="2026-01-16 23:57:00.477415118 +0000 UTC m=+10.211477986" watchObservedRunningTime="2026-01-16 23:57:00.477725223 +0000 UTC m=+10.211788051" Jan 16 23:57:06.014973 sudo[1703]: pam_unix(sudo:session): session closed for user root Jan 16 23:57:06.115118 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 16 23:57:06.120981 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Jan 16 23:57:06.121245 systemd[1]: sshd@6-49.13.115.208:22-4.153.228.146:56618.service: Deactivated successfully. Jan 16 23:57:06.124964 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 23:57:06.125348 systemd[1]: session-7.scope: Consumed 7.052s CPU time, 152.3M memory peak, 0B memory swap peak. Jan 16 23:57:06.127355 systemd-logind[1461]: Removed session 7. Jan 16 23:57:16.001604 systemd[1]: Created slice kubepods-besteffort-poddd598b0a_dbbb_49de_8fdf_24f922b3546b.slice - libcontainer container kubepods-besteffort-poddd598b0a_dbbb_49de_8fdf_24f922b3546b.slice. Jan 16 23:57:16.058372 kubelet[2574]: I0116 23:57:16.058312 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd598b0a-dbbb-49de-8fdf-24f922b3546b-tigera-ca-bundle\") pod \"calico-typha-98bc6c554-bdcv9\" (UID: \"dd598b0a-dbbb-49de-8fdf-24f922b3546b\") " pod="calico-system/calico-typha-98bc6c554-bdcv9" Jan 16 23:57:16.058372 kubelet[2574]: I0116 23:57:16.058368 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd598b0a-dbbb-49de-8fdf-24f922b3546b-typha-certs\") pod \"calico-typha-98bc6c554-bdcv9\" (UID: \"dd598b0a-dbbb-49de-8fdf-24f922b3546b\") " pod="calico-system/calico-typha-98bc6c554-bdcv9" Jan 16 23:57:16.059120 kubelet[2574]: I0116 23:57:16.058387 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zjs2\" (UniqueName: \"kubernetes.io/projected/dd598b0a-dbbb-49de-8fdf-24f922b3546b-kube-api-access-9zjs2\") pod \"calico-typha-98bc6c554-bdcv9\" (UID: \"dd598b0a-dbbb-49de-8fdf-24f922b3546b\") " pod="calico-system/calico-typha-98bc6c554-bdcv9" Jan 16 23:57:16.234093 systemd[1]: Created slice kubepods-besteffort-pod5346478f_1dc9_4e20_8619_034e658537fb.slice - libcontainer container kubepods-besteffort-pod5346478f_1dc9_4e20_8619_034e658537fb.slice. 
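[The slice just created is for calico-node-788k7, whose volume list below includes a flexvol-driver-host mount. As the entries that follow show, kubelet's FlexVolume prober then fails in a loop: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not installed yet, so the driver's "init" call produces no output, and driver-call.go's attempt to decode that empty stdout as JSON yields the literal encoding/json error seen in the log. Minimal reproduction of the decode step:]

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// driver-call.go unmarshals the driver's stdout; with the uds binary
	// missing, the output is empty and decoding fails exactly as logged.
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}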
Jan 16 23:57:16.261454 kubelet[2574]: I0116 23:57:16.260544 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-cni-net-dir\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261454 kubelet[2574]: I0116 23:57:16.260581 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5346478f-1dc9-4e20-8619-034e658537fb-tigera-ca-bundle\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261454 kubelet[2574]: I0116 23:57:16.260642 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-cni-bin-dir\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261454 kubelet[2574]: I0116 23:57:16.260658 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-cni-log-dir\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261454 kubelet[2574]: I0116 23:57:16.260686 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-var-run-calico\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261757 kubelet[2574]: I0116 23:57:16.260712 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-lib-modules\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261757 kubelet[2574]: I0116 23:57:16.260728 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-policysync\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261757 kubelet[2574]: I0116 23:57:16.260746 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-flexvol-driver-host\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261757 kubelet[2574]: I0116 23:57:16.260763 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-var-lib-calico\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261757 kubelet[2574]: I0116 23:57:16.260777 2574 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5346478f-1dc9-4e20-8619-034e658537fb-xtables-lock\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261929 kubelet[2574]: I0116 23:57:16.260793 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87b7w\" (UniqueName: \"kubernetes.io/projected/5346478f-1dc9-4e20-8619-034e658537fb-kube-api-access-87b7w\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.261929 kubelet[2574]: I0116 23:57:16.260843 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5346478f-1dc9-4e20-8619-034e658537fb-node-certs\") pod \"calico-node-788k7\" (UID: \"5346478f-1dc9-4e20-8619-034e658537fb\") " pod="calico-system/calico-node-788k7" Jan 16 23:57:16.307559 containerd[1486]: time="2026-01-16T23:57:16.307497941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-98bc6c554-bdcv9,Uid:dd598b0a-dbbb-49de-8fdf-24f922b3546b,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:16.338833 containerd[1486]: time="2026-01-16T23:57:16.338719148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:16.338833 containerd[1486]: time="2026-01-16T23:57:16.338779991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:16.338833 containerd[1486]: time="2026-01-16T23:57:16.338806153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:16.339334 containerd[1486]: time="2026-01-16T23:57:16.338915158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:16.368200 systemd[1]: Started cri-containerd-fb915bd4f37696488184425e31030b035a41e423be5659ab7b84419844ae5810.scope - libcontainer container fb915bd4f37696488184425e31030b035a41e423be5659ab7b84419844ae5810. Jan 16 23:57:16.393773 kubelet[2574]: E0116 23:57:16.393720 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.394655 kubelet[2574]: W0116 23:57:16.394623 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.394807 kubelet[2574]: E0116 23:57:16.394754 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:16.415708 kubelet[2574]: E0116 23:57:16.415657 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
[...the driver-call.go:262 / driver-call.go:149 / plugins.go:703 FlexVolume "init" error triple above repeats 34 more times between 23:57:16.438347 and 23:57:16.471856, identical apart from timestamps; the unique entries interleaved with it are kept below...]
Jan 16 23:57:16.466048 kubelet[2574]: I0116 23:57:16.465853 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebb15273-01f0-4342-86a4-e67c5f3e53d0-kubelet-dir\") pod \"csi-node-driver-b76rd\" (UID: \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\") " pod="calico-system/csi-node-driver-b76rd"
Jan 16 23:57:16.466748 kubelet[2574]: I0116 23:57:16.466581 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebb15273-01f0-4342-86a4-e67c5f3e53d0-registration-dir\") pod \"csi-node-driver-b76rd\" (UID: \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\") " pod="calico-system/csi-node-driver-b76rd"
Jan 16 23:57:16.467728 kubelet[2574]: I0116 23:57:16.467633 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebb15273-01f0-4342-86a4-e67c5f3e53d0-socket-dir\") pod \"csi-node-driver-b76rd\" (UID: \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\") " pod="calico-system/csi-node-driver-b76rd"
Jan 16 23:57:16.468087 kubelet[2574]: I0116 23:57:16.468036 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4fs\" (UniqueName: \"kubernetes.io/projected/ebb15273-01f0-4342-86a4-e67c5f3e53d0-kube-api-access-gr4fs\") pod \"csi-node-driver-b76rd\" (UID: \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\") " pod="calico-system/csi-node-driver-b76rd"
Jan 16 23:57:16.468468 kubelet[2574]: I0116 23:57:16.468447 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebb15273-01f0-4342-86a4-e67c5f3e53d0-varrun\") pod \"csi-node-driver-b76rd\" (UID: \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\") " pod="calico-system/csi-node-driver-b76rd"
Jan 16 23:57:16.472267 kubelet[2574]: E0116 23:57:16.472186 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.472267 kubelet[2574]: W0116 23:57:16.472197 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.472267 kubelet[2574]: E0116 23:57:16.472206 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 16 23:57:16.473940 containerd[1486]: time="2026-01-16T23:57:16.473814076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-98bc6c554-bdcv9,Uid:dd598b0a-dbbb-49de-8fdf-24f922b3546b,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb915bd4f37696488184425e31030b035a41e423be5659ab7b84419844ae5810\"" Jan 16 23:57:16.476588 containerd[1486]: time="2026-01-16T23:57:16.476441054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 23:57:16.539047 containerd[1486]: time="2026-01-16T23:57:16.538994475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-788k7,Uid:5346478f-1dc9-4e20-8619-034e658537fb,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:16.574316 kubelet[2574]: E0116 23:57:16.574007 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.574316 kubelet[2574]: W0116 23:57:16.574035 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.574316 kubelet[2574]: E0116 23:57:16.574057 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.575173 kubelet[2574]: E0116 23:57:16.574878 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.575173 kubelet[2574]: W0116 23:57:16.574894 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.575173 kubelet[2574]: E0116 23:57:16.574909 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.577389 kubelet[2574]: E0116 23:57:16.576217 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.577389 kubelet[2574]: W0116 23:57:16.576234 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.577389 kubelet[2574]: E0116 23:57:16.576249 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.577677 kubelet[2574]: E0116 23:57:16.577619 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.577677 kubelet[2574]: W0116 23:57:16.577635 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.577677 kubelet[2574]: E0116 23:57:16.577649 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:16.578303 containerd[1486]: time="2026-01-16T23:57:16.573720387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:16.578303 containerd[1486]: time="2026-01-16T23:57:16.575968865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:16.578303 containerd[1486]: time="2026-01-16T23:57:16.575989466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:16.578303 containerd[1486]: time="2026-01-16T23:57:16.576099792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:16.578652 kubelet[2574]: E0116 23:57:16.578497 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.578652 kubelet[2574]: W0116 23:57:16.578514 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.578652 kubelet[2574]: E0116 23:57:16.578525 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.579190 kubelet[2574]: E0116 23:57:16.579174 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.579449 kubelet[2574]: W0116 23:57:16.579270 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.579449 kubelet[2574]: E0116 23:57:16.579289 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.580294 kubelet[2574]: E0116 23:57:16.580097 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.580294 kubelet[2574]: W0116 23:57:16.580114 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.580294 kubelet[2574]: E0116 23:57:16.580126 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.583173 kubelet[2574]: E0116 23:57:16.582781 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.583173 kubelet[2574]: W0116 23:57:16.582798 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.583173 kubelet[2574]: E0116 23:57:16.582813 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:16.584518 kubelet[2574]: E0116 23:57:16.584419 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.584518 kubelet[2574]: W0116 23:57:16.584435 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.584845 kubelet[2574]: E0116 23:57:16.584449 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.585200 kubelet[2574]: E0116 23:57:16.584946 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.585200 kubelet[2574]: W0116 23:57:16.585103 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.585200 kubelet[2574]: E0116 23:57:16.585117 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.585839 kubelet[2574]: E0116 23:57:16.585715 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.585839 kubelet[2574]: W0116 23:57:16.585736 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.585839 kubelet[2574]: E0116 23:57:16.585750 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.586118 kubelet[2574]: E0116 23:57:16.586104 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.586323 kubelet[2574]: W0116 23:57:16.586196 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.586323 kubelet[2574]: E0116 23:57:16.586233 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.586828 kubelet[2574]: E0116 23:57:16.586672 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.586828 kubelet[2574]: W0116 23:57:16.586736 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.586828 kubelet[2574]: E0116 23:57:16.586750 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:16.587766 kubelet[2574]: E0116 23:57:16.587374 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.587766 kubelet[2574]: W0116 23:57:16.587391 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.587766 kubelet[2574]: E0116 23:57:16.587502 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.588240 kubelet[2574]: E0116 23:57:16.588221 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.588240 kubelet[2574]: W0116 23:57:16.588239 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.588536 kubelet[2574]: E0116 23:57:16.588252 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.589436 kubelet[2574]: E0116 23:57:16.589414 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.589513 kubelet[2574]: W0116 23:57:16.589435 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.589513 kubelet[2574]: E0116 23:57:16.589452 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.589850 kubelet[2574]: E0116 23:57:16.589817 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.589897 kubelet[2574]: W0116 23:57:16.589851 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.589897 kubelet[2574]: E0116 23:57:16.589865 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.590193 kubelet[2574]: E0116 23:57:16.590166 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.590193 kubelet[2574]: W0116 23:57:16.590193 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.590260 kubelet[2574]: E0116 23:57:16.590206 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:16.590738 kubelet[2574]: E0116 23:57:16.590711 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.590738 kubelet[2574]: W0116 23:57:16.590730 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.590892 kubelet[2574]: E0116 23:57:16.590745 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.591302 kubelet[2574]: E0116 23:57:16.591273 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.591476 kubelet[2574]: W0116 23:57:16.591349 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.591476 kubelet[2574]: E0116 23:57:16.591399 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.592160 kubelet[2574]: E0116 23:57:16.592140 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.592160 kubelet[2574]: W0116 23:57:16.592158 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.592304 kubelet[2574]: E0116 23:57:16.592170 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.592616 kubelet[2574]: E0116 23:57:16.592598 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.592679 kubelet[2574]: W0116 23:57:16.592614 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.592679 kubelet[2574]: E0116 23:57:16.592650 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.593010 kubelet[2574]: E0116 23:57:16.592975 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.593010 kubelet[2574]: W0116 23:57:16.593011 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.593080 kubelet[2574]: E0116 23:57:16.593022 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:16.593432 kubelet[2574]: E0116 23:57:16.593417 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.593432 kubelet[2574]: W0116 23:57:16.593430 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.593561 kubelet[2574]: E0116 23:57:16.593441 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.593826 kubelet[2574]: E0116 23:57:16.593808 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.593826 kubelet[2574]: W0116 23:57:16.593822 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.594026 kubelet[2574]: E0116 23:57:16.593834 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.608712 systemd[1]: Started cri-containerd-be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6.scope - libcontainer container be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6. Jan 16 23:57:16.613266 kubelet[2574]: E0116 23:57:16.613070 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:16.613266 kubelet[2574]: W0116 23:57:16.613095 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:16.613266 kubelet[2574]: E0116 23:57:16.613116 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:16.647154 containerd[1486]: time="2026-01-16T23:57:16.646816843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-788k7,Uid:5346478f-1dc9-4e20-8619-034e658537fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\"" Jan 16 23:57:17.900786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820597202.mount: Deactivated successfully. 
Jan 16 23:57:18.287006 containerd[1486]: time="2026-01-16T23:57:18.286952063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:18.289069 containerd[1486]: time="2026-01-16T23:57:18.288986646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 16 23:57:18.291355 containerd[1486]: time="2026-01-16T23:57:18.290133464Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:18.293940 containerd[1486]: time="2026-01-16T23:57:18.293888254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:18.294924 containerd[1486]: time="2026-01-16T23:57:18.294742138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.818142235s"
Jan 16 23:57:18.294924 containerd[1486]: time="2026-01-16T23:57:18.294804101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 16 23:57:18.298873 containerd[1486]: time="2026-01-16T23:57:18.298583173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 16 23:57:18.318885 containerd[1486]: time="2026-01-16T23:57:18.318816279Z" level=info msg="CreateContainer within sandbox \"fb915bd4f37696488184425e31030b035a41e423be5659ab7b84419844ae5810\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 16 23:57:18.340882 containerd[1486]: time="2026-01-16T23:57:18.340707029Z" level=info msg="CreateContainer within sandbox \"fb915bd4f37696488184425e31030b035a41e423be5659ab7b84419844ae5810\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ad8d4c95c85a718a8c17759fd982221f15373ed855d22695930d286b3fb4373\""
Jan 16 23:57:18.343249 containerd[1486]: time="2026-01-16T23:57:18.343118472Z" level=info msg="StartContainer for \"5ad8d4c95c85a718a8c17759fd982221f15373ed855d22695930d286b3fb4373\""
Jan 16 23:57:18.379580 kubelet[2574]: E0116 23:57:18.378877 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 16 23:57:18.388823 systemd[1]: Started cri-containerd-5ad8d4c95c85a718a8c17759fd982221f15373ed855d22695930d286b3fb4373.scope - libcontainer container 5ad8d4c95c85a718a8c17759fd982221f15373ed855d22695930d286b3fb4373.
Jan 16 23:57:18.430854 containerd[1486]: time="2026-01-16T23:57:18.430777558Z" level=info msg="StartContainer for \"5ad8d4c95c85a718a8c17759fd982221f15373ed855d22695930d286b3fb4373\" returns successfully"
Jan 16 23:57:18.542566 kubelet[2574]: I0116 23:57:18.541768 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-98bc6c554-bdcv9" podStartSLOduration=1.7193930549999998 podStartE2EDuration="3.541748147s" podCreationTimestamp="2026-01-16 23:57:15 +0000 UTC" firstStartedPulling="2026-01-16 23:57:16.475190748 +0000 UTC m=+26.209253536" lastFinishedPulling="2026-01-16 23:57:18.2975458 +0000 UTC m=+28.031608628" observedRunningTime="2026-01-16 23:57:18.541623901 +0000 UTC m=+28.275686729" watchObservedRunningTime="2026-01-16 23:57:18.541748147 +0000 UTC m=+28.275810975"
Jan 16 23:57:18.567374 kubelet[2574]: E0116 23:57:18.567338 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 23:57:18.567654 kubelet[2574]: W0116 23:57:18.567499 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 23:57:18.567654 kubelet[2574]: E0116 23:57:18.567527 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 16 23:57:18.570697 kubelet[2574]: E0116 23:57:18.570556 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.570697 kubelet[2574]: W0116 23:57:18.570574 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.570697 kubelet[2574]: E0116 23:57:18.570586 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.571026 kubelet[2574]: E0116 23:57:18.570955 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.571026 kubelet[2574]: W0116 23:57:18.570967 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.571026 kubelet[2574]: E0116 23:57:18.570978 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.572302 kubelet[2574]: E0116 23:57:18.572170 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.572302 kubelet[2574]: W0116 23:57:18.572188 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.572302 kubelet[2574]: E0116 23:57:18.572201 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.572714 kubelet[2574]: E0116 23:57:18.572626 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.572714 kubelet[2574]: W0116 23:57:18.572642 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.572714 kubelet[2574]: E0116 23:57:18.572655 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.573804 kubelet[2574]: E0116 23:57:18.573685 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.573804 kubelet[2574]: W0116 23:57:18.573704 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.573804 kubelet[2574]: E0116 23:57:18.573716 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:18.575792 kubelet[2574]: E0116 23:57:18.575715 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.575792 kubelet[2574]: W0116 23:57:18.575731 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.575792 kubelet[2574]: E0116 23:57:18.575745 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.576259 kubelet[2574]: E0116 23:57:18.576154 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.576259 kubelet[2574]: W0116 23:57:18.576170 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.576259 kubelet[2574]: E0116 23:57:18.576181 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.576609 kubelet[2574]: E0116 23:57:18.576531 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.576609 kubelet[2574]: W0116 23:57:18.576545 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.576609 kubelet[2574]: E0116 23:57:18.576556 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.577830 kubelet[2574]: E0116 23:57:18.577744 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.577830 kubelet[2574]: W0116 23:57:18.577758 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.577830 kubelet[2574]: E0116 23:57:18.577773 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.578311 kubelet[2574]: E0116 23:57:18.578206 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.578311 kubelet[2574]: W0116 23:57:18.578219 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.578311 kubelet[2574]: E0116 23:57:18.578231 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:18.579619 kubelet[2574]: E0116 23:57:18.579520 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.579619 kubelet[2574]: W0116 23:57:18.579539 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.579619 kubelet[2574]: E0116 23:57:18.579552 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.602808 kubelet[2574]: E0116 23:57:18.602589 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.602808 kubelet[2574]: W0116 23:57:18.602622 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.602808 kubelet[2574]: E0116 23:57:18.602645 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.603446 kubelet[2574]: E0116 23:57:18.603346 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.603446 kubelet[2574]: W0116 23:57:18.603363 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.603446 kubelet[2574]: E0116 23:57:18.603377 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.603680 kubelet[2574]: E0116 23:57:18.603660 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.603680 kubelet[2574]: W0116 23:57:18.603676 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.603773 kubelet[2574]: E0116 23:57:18.603690 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.603912 kubelet[2574]: E0116 23:57:18.603893 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.603912 kubelet[2574]: W0116 23:57:18.603907 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.604675 kubelet[2574]: E0116 23:57:18.603916 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:18.604675 kubelet[2574]: E0116 23:57:18.604081 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.604675 kubelet[2574]: W0116 23:57:18.604115 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.604675 kubelet[2574]: E0116 23:57:18.604127 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.604675 kubelet[2574]: E0116 23:57:18.604335 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.604675 kubelet[2574]: W0116 23:57:18.604354 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.604675 kubelet[2574]: E0116 23:57:18.604363 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.605134 kubelet[2574]: E0116 23:57:18.605010 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.605134 kubelet[2574]: W0116 23:57:18.605026 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.605134 kubelet[2574]: E0116 23:57:18.605039 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.605340 kubelet[2574]: E0116 23:57:18.605323 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.605340 kubelet[2574]: W0116 23:57:18.605337 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.606544 kubelet[2574]: E0116 23:57:18.605348 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.606765 kubelet[2574]: E0116 23:57:18.606739 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.606765 kubelet[2574]: W0116 23:57:18.606762 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.606862 kubelet[2574]: E0116 23:57:18.606779 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:18.607650 kubelet[2574]: E0116 23:57:18.607628 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.607650 kubelet[2574]: W0116 23:57:18.607646 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.607809 kubelet[2574]: E0116 23:57:18.607661 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.608016 kubelet[2574]: E0116 23:57:18.607985 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.608016 kubelet[2574]: W0116 23:57:18.607997 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.608016 kubelet[2574]: E0116 23:57:18.608011 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.608247 kubelet[2574]: E0116 23:57:18.608234 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.608295 kubelet[2574]: W0116 23:57:18.608245 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.608295 kubelet[2574]: E0116 23:57:18.608269 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.610590 kubelet[2574]: E0116 23:57:18.610563 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.610590 kubelet[2574]: W0116 23:57:18.610582 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.610590 kubelet[2574]: E0116 23:57:18.610598 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.611107 kubelet[2574]: E0116 23:57:18.611087 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.611107 kubelet[2574]: W0116 23:57:18.611103 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.611107 kubelet[2574]: E0116 23:57:18.611114 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:18.611301 kubelet[2574]: E0116 23:57:18.611283 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.611301 kubelet[2574]: W0116 23:57:18.611291 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.611301 kubelet[2574]: E0116 23:57:18.611299 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.612655 kubelet[2574]: E0116 23:57:18.612633 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.612655 kubelet[2574]: W0116 23:57:18.612652 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.612760 kubelet[2574]: E0116 23:57:18.612670 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.612908 kubelet[2574]: E0116 23:57:18.612896 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.612908 kubelet[2574]: W0116 23:57:18.612907 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.613012 kubelet[2574]: E0116 23:57:18.612916 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:18.613636 kubelet[2574]: E0116 23:57:18.613617 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:18.613636 kubelet[2574]: W0116 23:57:18.613632 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:18.613732 kubelet[2574]: E0116 23:57:18.613643 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.524883 kubelet[2574]: I0116 23:57:19.524363 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 23:57:19.587057 kubelet[2574]: E0116 23:57:19.586812 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.587057 kubelet[2574]: W0116 23:57:19.586852 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.587057 kubelet[2574]: E0116 23:57:19.586888 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.587695 kubelet[2574]: E0116 23:57:19.587262 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.587695 kubelet[2574]: W0116 23:57:19.587276 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.587695 kubelet[2574]: E0116 23:57:19.587296 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.588162 kubelet[2574]: E0116 23:57:19.587921 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.588162 kubelet[2574]: W0116 23:57:19.587961 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.588162 kubelet[2574]: E0116 23:57:19.587978 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.588635 kubelet[2574]: E0116 23:57:19.588476 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.588635 kubelet[2574]: W0116 23:57:19.588493 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.588635 kubelet[2574]: E0116 23:57:19.588515 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.588923 kubelet[2574]: E0116 23:57:19.588889 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.589060 kubelet[2574]: W0116 23:57:19.588988 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.589060 kubelet[2574]: E0116 23:57:19.589007 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.589810 kubelet[2574]: E0116 23:57:19.589661 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.589810 kubelet[2574]: W0116 23:57:19.589686 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.589810 kubelet[2574]: E0116 23:57:19.589726 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.590671 kubelet[2574]: E0116 23:57:19.590381 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.590671 kubelet[2574]: W0116 23:57:19.590394 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.590671 kubelet[2574]: E0116 23:57:19.590438 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.591038 kubelet[2574]: E0116 23:57:19.590840 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.591038 kubelet[2574]: W0116 23:57:19.590851 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.591038 kubelet[2574]: E0116 23:57:19.590861 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.591388 kubelet[2574]: E0116 23:57:19.591375 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.591582 kubelet[2574]: W0116 23:57:19.591452 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.591582 kubelet[2574]: E0116 23:57:19.591494 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.591928 kubelet[2574]: E0116 23:57:19.591821 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.591928 kubelet[2574]: W0116 23:57:19.591833 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.591928 kubelet[2574]: E0116 23:57:19.591843 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.592299 kubelet[2574]: E0116 23:57:19.592202 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.592299 kubelet[2574]: W0116 23:57:19.592214 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.592446 kubelet[2574]: E0116 23:57:19.592373 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.592754 kubelet[2574]: E0116 23:57:19.592719 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.592754 kubelet[2574]: W0116 23:57:19.592739 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.592835 kubelet[2574]: E0116 23:57:19.592761 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.593000 kubelet[2574]: E0116 23:57:19.592985 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.593041 kubelet[2574]: W0116 23:57:19.592998 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.593041 kubelet[2574]: E0116 23:57:19.593028 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.593309 kubelet[2574]: E0116 23:57:19.593291 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.593309 kubelet[2574]: W0116 23:57:19.593307 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.593375 kubelet[2574]: E0116 23:57:19.593317 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.593571 kubelet[2574]: E0116 23:57:19.593553 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.593571 kubelet[2574]: W0116 23:57:19.593569 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.593622 kubelet[2574]: E0116 23:57:19.593579 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.613921 kubelet[2574]: E0116 23:57:19.613876 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.613921 kubelet[2574]: W0116 23:57:19.613906 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.613921 kubelet[2574]: E0116 23:57:19.613951 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.615369 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.617932 kubelet[2574]: W0116 23:57:19.615401 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.615419 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.616732 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.617932 kubelet[2574]: W0116 23:57:19.616745 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.616761 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.616957 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.617932 kubelet[2574]: W0116 23:57:19.616966 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.616975 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.617932 kubelet[2574]: E0116 23:57:19.617199 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.618862 kubelet[2574]: W0116 23:57:19.617209 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.618862 kubelet[2574]: E0116 23:57:19.617219 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.618862 kubelet[2574]: E0116 23:57:19.617498 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.618862 kubelet[2574]: W0116 23:57:19.617508 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.618862 kubelet[2574]: E0116 23:57:19.617519 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.618862 kubelet[2574]: E0116 23:57:19.618195 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.618862 kubelet[2574]: W0116 23:57:19.618207 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.618862 kubelet[2574]: E0116 23:57:19.618218 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.618862 kubelet[2574]: E0116 23:57:19.618750 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.618862 kubelet[2574]: W0116 23:57:19.618762 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.619085 kubelet[2574]: E0116 23:57:19.618773 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.619664 kubelet[2574]: E0116 23:57:19.619627 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.619664 kubelet[2574]: W0116 23:57:19.619642 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.619664 kubelet[2574]: E0116 23:57:19.619653 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.619922 kubelet[2574]: E0116 23:57:19.619848 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.619922 kubelet[2574]: W0116 23:57:19.619857 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.619922 kubelet[2574]: E0116 23:57:19.619866 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.620195 kubelet[2574]: E0116 23:57:19.620027 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.620195 kubelet[2574]: W0116 23:57:19.620038 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.620195 kubelet[2574]: E0116 23:57:19.620046 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.620557 kubelet[2574]: E0116 23:57:19.620492 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.620557 kubelet[2574]: W0116 23:57:19.620504 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.620557 kubelet[2574]: E0116 23:57:19.620515 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.621305 kubelet[2574]: E0116 23:57:19.620942 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.621305 kubelet[2574]: W0116 23:57:19.620960 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.621305 kubelet[2574]: E0116 23:57:19.620973 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.622654 kubelet[2574]: E0116 23:57:19.622505 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.622654 kubelet[2574]: W0116 23:57:19.622524 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.622654 kubelet[2574]: E0116 23:57:19.622536 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.622773 kubelet[2574]: E0116 23:57:19.622764 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.622795 kubelet[2574]: W0116 23:57:19.622773 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.622795 kubelet[2574]: E0116 23:57:19.622782 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.623231 kubelet[2574]: E0116 23:57:19.622919 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.623231 kubelet[2574]: W0116 23:57:19.622937 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.623231 kubelet[2574]: E0116 23:57:19.622945 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:19.625766 kubelet[2574]: E0116 23:57:19.625734 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.625766 kubelet[2574]: W0116 23:57:19.625759 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.625867 kubelet[2574]: E0116 23:57:19.625773 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.626890 kubelet[2574]: E0116 23:57:19.626368 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:19.626890 kubelet[2574]: W0116 23:57:19.626382 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:19.626890 kubelet[2574]: E0116 23:57:19.626488 2574 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:19.731210 containerd[1486]: time="2026-01-16T23:57:19.730284841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:19.731965 containerd[1486]: time="2026-01-16T23:57:19.731915082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 16 23:57:19.734810 containerd[1486]: time="2026-01-16T23:57:19.734750583Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:19.738708 containerd[1486]: time="2026-01-16T23:57:19.738670499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:19.740090 containerd[1486]: time="2026-01-16T23:57:19.739392215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.44076556s" Jan 16 23:57:19.740090 containerd[1486]: time="2026-01-16T23:57:19.739598465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 16 23:57:19.744278 containerd[1486]: time="2026-01-16T23:57:19.744211815Z" level=info msg="CreateContainer within sandbox \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 23:57:19.761896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887543290.mount: Deactivated successfully. 
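The repeated driver-call.go failures above all come from the kubelet's FlexVolume prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary with the `init` argument and expects a JSON status on stdout. When the `uds` binary does not exist, the exec fails, stdout is empty, and decoding the empty output produces the paired "unexpected end of JSON input" error. A minimal sketch of that call-and-parse pattern (illustrative only, not the kubelet's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver prints, e.g.
// {"status":"Success"}. Field set trimmed for illustration.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver execs a FlexVolume driver with one command (here "init") and
// decodes its stdout. A failed exec leaves the output empty, so the JSON
// decode fails with "unexpected end of JSON input" -- the exact pairing of
// errors seen in the log above.
func callDriver(driverPath string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(driverPath, args...).Output()
	var status DriverStatus
	if jsonErr := json.Unmarshal(out, &status); jsonErr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %w",
			args[0], string(out), jsonErr)
	}
	if execErr != nil {
		return nil, execErr
	}
	return &status, nil
}

func main() {
	// The path from the log; on this node the binary does not exist yet,
	// so both the exec and the JSON decode fail.
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```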
Jan 16 23:57:19.765719 containerd[1486]: time="2026-01-16T23:57:19.765670443Z" level=info msg="CreateContainer within sandbox \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366\""
Jan 16 23:57:19.768052 containerd[1486]: time="2026-01-16T23:57:19.768014920Z" level=info msg="StartContainer for \"e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366\""
Jan 16 23:57:19.803733 systemd[1]: Started cri-containerd-e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366.scope - libcontainer container e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366.
Jan 16 23:57:19.836546 containerd[1486]: time="2026-01-16T23:57:19.836200516Z" level=info msg="StartContainer for \"e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366\" returns successfully"
Jan 16 23:57:19.858137 systemd[1]: cri-containerd-e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366.scope: Deactivated successfully.
Jan 16 23:57:19.879883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366-rootfs.mount: Deactivated successfully.
Jan 16 23:57:19.968387 containerd[1486]: time="2026-01-16T23:57:19.968261612Z" level=info msg="shim disconnected" id=e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366 namespace=k8s.io
Jan 16 23:57:19.968387 containerd[1486]: time="2026-01-16T23:57:19.968349977Z" level=warning msg="cleaning up after shim disconnected" id=e73706568e6f84bd41f80f979d65eafb86b29890397e9c9225a5ce8afb234366 namespace=k8s.io
Jan 16 23:57:19.968387 containerd[1486]: time="2026-01-16T23:57:19.968362537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 23:57:19.981872 containerd[1486]: time="2026-01-16T23:57:19.981801487Z" level=warning msg="cleanup warnings time=\"2026-01-16T23:57:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 16 23:57:20.380440 kubelet[2574]: E0116 23:57:20.379856 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 16 23:57:20.537529 containerd[1486]: time="2026-01-16T23:57:20.537414292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 16 23:57:22.379469 kubelet[2574]: E0116 23:57:22.378900 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 16 23:57:23.068435 containerd[1486]: time="2026-01-16T23:57:23.068373807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:23.070220 containerd[1486]: time="2026-01-16T23:57:23.070167970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 16 23:57:23.070651 containerd[1486]: time="2026-01-16T23:57:23.070403261Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:23.072900 containerd[1486]: time="2026-01-16T23:57:23.072845535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:23.073756 containerd[1486]: time="2026-01-16T23:57:23.073621371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.536161677s"
Jan 16 23:57:23.073756 containerd[1486]: time="2026-01-16T23:57:23.073659333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 16 23:57:23.079545 containerd[1486]: time="2026-01-16T23:57:23.079419602Z" level=info msg="CreateContainer within sandbox \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 16 23:57:23.103060 containerd[1486]: time="2026-01-16T23:57:23.102987541Z" level=info msg="CreateContainer within sandbox \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52\""
Jan 16 23:57:23.103815 containerd[1486]: time="2026-01-16T23:57:23.103664773Z" level=info msg="StartContainer for \"f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52\""
Jan 16 23:57:23.153829 systemd[1]: Started cri-containerd-f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52.scope - libcontainer container f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52.
Jan 16 23:57:23.187777 containerd[1486]: time="2026-01-16T23:57:23.187669612Z" level=info msg="StartContainer for \"f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52\" returns successfully"
Jan 16 23:57:23.745938 systemd[1]: cri-containerd-f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52.scope: Deactivated successfully.
Jan 16 23:57:23.826349 kubelet[2574]: I0116 23:57:23.823378 2574 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 16 23:57:23.864009 containerd[1486]: time="2026-01-16T23:57:23.863838595Z" level=info msg="shim disconnected" id=f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52 namespace=k8s.io
Jan 16 23:57:23.864547 containerd[1486]: time="2026-01-16T23:57:23.864518427Z" level=warning msg="cleaning up after shim disconnected" id=f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52 namespace=k8s.io
Jan 16 23:57:23.864678 containerd[1486]: time="2026-01-16T23:57:23.864662113Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 23:57:23.898923 systemd[1]: Created slice kubepods-burstable-pod2e8f4602_9cf1_4251_be4e_4def80a11ec7.slice - libcontainer container kubepods-burstable-pod2e8f4602_9cf1_4251_be4e_4def80a11ec7.slice.
Jan 16 23:57:23.908486 containerd[1486]: time="2026-01-16T23:57:23.906916445Z" level=warning msg="cleanup warnings time=\"2026-01-16T23:57:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 16 23:57:23.914479 systemd[1]: Created slice kubepods-besteffort-podb69575f7_9998_4116_85c5_3f82382a6495.slice - libcontainer container kubepods-besteffort-podb69575f7_9998_4116_85c5_3f82382a6495.slice.
Jan 16 23:57:23.927256 systemd[1]: Created slice kubepods-besteffort-podab55bbc8_2f84_4b63_ae7a_3f7a0c596089.slice - libcontainer container kubepods-besteffort-podab55bbc8_2f84_4b63_ae7a_3f7a0c596089.slice.
Jan 16 23:57:23.946882 kubelet[2574]: I0116 23:57:23.946825 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b69575f7-9998-4116-85c5-3f82382a6495-whisker-ca-bundle\") pod \"whisker-848475bf4c-wx5qm\" (UID: \"b69575f7-9998-4116-85c5-3f82382a6495\") " pod="calico-system/whisker-848475bf4c-wx5qm"
Jan 16 23:57:23.946882 kubelet[2574]: I0116 23:57:23.946872 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5v9d\" (UniqueName: \"kubernetes.io/projected/b69575f7-9998-4116-85c5-3f82382a6495-kube-api-access-x5v9d\") pod \"whisker-848475bf4c-wx5qm\" (UID: \"b69575f7-9998-4116-85c5-3f82382a6495\") " pod="calico-system/whisker-848475bf4c-wx5qm"
Jan 16 23:57:23.946882 kubelet[2574]: I0116 23:57:23.946889 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph44k\" (UniqueName: \"kubernetes.io/projected/2e8f4602-9cf1-4251-be4e-4def80a11ec7-kube-api-access-ph44k\") pod \"coredns-674b8bbfcf-m8gpg\" (UID: \"2e8f4602-9cf1-4251-be4e-4def80a11ec7\") " pod="kube-system/coredns-674b8bbfcf-m8gpg"
Jan 16 23:57:23.947058 kubelet[2574]: I0116 23:57:23.946910 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/494d2d41-870f-485e-a8b2-cbb0fecf4357-goldmane-ca-bundle\") pod \"goldmane-666569f655-hkcjr\" (UID: \"494d2d41-870f-485e-a8b2-cbb0fecf4357\") " pod="calico-system/goldmane-666569f655-hkcjr"
Jan 16 23:57:23.947058 kubelet[2574]: I0116 23:57:23.946929 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2216c59e-7647-44aa-810f-40503d382780-calico-apiserver-certs\") pod \"calico-apiserver-7c6f969f4-6hcrh\" (UID: \"2216c59e-7647-44aa-810f-40503d382780\") " pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh"
Jan 16 23:57:23.947058 kubelet[2574]: I0116 23:57:23.946945 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxng\" (UniqueName: \"kubernetes.io/projected/2216c59e-7647-44aa-810f-40503d382780-kube-api-access-wvxng\") pod \"calico-apiserver-7c6f969f4-6hcrh\" (UID: \"2216c59e-7647-44aa-810f-40503d382780\") " pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh"
Jan 16 23:57:23.947058 kubelet[2574]: I0116 23:57:23.946964 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/494d2d41-870f-485e-a8b2-cbb0fecf4357-goldmane-key-pair\") pod \"goldmane-666569f655-hkcjr\" (UID: \"494d2d41-870f-485e-a8b2-cbb0fecf4357\") " pod="calico-system/goldmane-666569f655-hkcjr"
Jan 16 23:57:23.947058 kubelet[2574]: I0116 23:57:23.946979 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b69575f7-9998-4116-85c5-3f82382a6495-whisker-backend-key-pair\") pod \"whisker-848475bf4c-wx5qm\" (UID: \"b69575f7-9998-4116-85c5-3f82382a6495\") " pod="calico-system/whisker-848475bf4c-wx5qm"
Jan 16 23:57:23.947180 kubelet[2574]: I0116 23:57:23.946993 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c95011c-e199-44d0-b1e0-17f58fab750a-config-volume\") pod \"coredns-674b8bbfcf-cvb6h\" (UID: \"7c95011c-e199-44d0-b1e0-17f58fab750a\") " pod="kube-system/coredns-674b8bbfcf-cvb6h"
Jan 16 23:57:23.947180 kubelet[2574]: I0116 23:57:23.947010 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwlqj\" (UniqueName: \"kubernetes.io/projected/7c95011c-e199-44d0-b1e0-17f58fab750a-kube-api-access-vwlqj\") pod \"coredns-674b8bbfcf-cvb6h\" (UID: \"7c95011c-e199-44d0-b1e0-17f58fab750a\") " pod="kube-system/coredns-674b8bbfcf-cvb6h"
Jan 16 23:57:23.947180 kubelet[2574]: I0116 23:57:23.947024 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab55bbc8-2f84-4b63-ae7a-3f7a0c596089-tigera-ca-bundle\") pod \"calico-kube-controllers-866b5b959f-q6rnd\" (UID: \"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089\") " pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd"
Jan 16 23:57:23.947180 kubelet[2574]: I0116 23:57:23.947040 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlx7\" (UniqueName: \"kubernetes.io/projected/ab55bbc8-2f84-4b63-ae7a-3f7a0c596089-kube-api-access-9rlx7\") pod \"calico-kube-controllers-866b5b959f-q6rnd\" (UID: \"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089\") " pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd"
Jan 16 23:57:23.947180 kubelet[2574]: I0116 23:57:23.947057 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e8f4602-9cf1-4251-be4e-4def80a11ec7-config-volume\") pod \"coredns-674b8bbfcf-m8gpg\" (UID: \"2e8f4602-9cf1-4251-be4e-4def80a11ec7\") " pod="kube-system/coredns-674b8bbfcf-m8gpg"
Jan 16 23:57:23.947324 kubelet[2574]: I0116 23:57:23.947079 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494d2d41-870f-485e-a8b2-cbb0fecf4357-config\") pod \"goldmane-666569f655-hkcjr\" (UID: \"494d2d41-870f-485e-a8b2-cbb0fecf4357\") " pod="calico-system/goldmane-666569f655-hkcjr"
Jan 16 23:57:23.947324 kubelet[2574]: I0116 23:57:23.947098 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxvc\" (UniqueName: \"kubernetes.io/projected/28527141-9485-40ed-9795-772c961207d3-kube-api-access-lpxvc\") pod \"calico-apiserver-7c6f969f4-kxjbr\" (UID: \"28527141-9485-40ed-9795-772c961207d3\") " pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr"
Jan 16 23:57:23.947324 kubelet[2574]: I0116 23:57:23.947115 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xrg\" (UniqueName: \"kubernetes.io/projected/494d2d41-870f-485e-a8b2-cbb0fecf4357-kube-api-access-58xrg\") pod \"goldmane-666569f655-hkcjr\" (UID: \"494d2d41-870f-485e-a8b2-cbb0fecf4357\") " pod="calico-system/goldmane-666569f655-hkcjr"
Jan 16 23:57:23.947324 kubelet[2574]: I0116 23:57:23.947132 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28527141-9485-40ed-9795-772c961207d3-calico-apiserver-certs\") pod \"calico-apiserver-7c6f969f4-kxjbr\" (UID: \"28527141-9485-40ed-9795-772c961207d3\") " pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr"
Jan 16 23:57:23.947752 systemd[1]: Created slice kubepods-besteffort-pod2216c59e_7647_44aa_810f_40503d382780.slice - libcontainer container kubepods-besteffort-pod2216c59e_7647_44aa_810f_40503d382780.slice.
Jan 16 23:57:23.959612 systemd[1]: Created slice kubepods-burstable-pod7c95011c_e199_44d0_b1e0_17f58fab750a.slice - libcontainer container kubepods-burstable-pod7c95011c_e199_44d0_b1e0_17f58fab750a.slice.
Jan 16 23:57:23.971436 systemd[1]: Created slice kubepods-besteffort-pod28527141_9485_40ed_9795_772c961207d3.slice - libcontainer container kubepods-besteffort-pod28527141_9485_40ed_9795_772c961207d3.slice.
Jan 16 23:57:23.981124 systemd[1]: Created slice kubepods-besteffort-pod494d2d41_870f_485e_a8b2_cbb0fecf4357.slice - libcontainer container kubepods-besteffort-pod494d2d41_870f_485e_a8b2_cbb0fecf4357.slice.
Jan 16 23:57:24.106936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0ea7f3b277ddaba1a47e43ed42e14d01de41bb5c2652d0a013d7e40033f3c52-rootfs.mount: Deactivated successfully.
Jan 16 23:57:24.207317 containerd[1486]: time="2026-01-16T23:57:24.207207475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8gpg,Uid:2e8f4602-9cf1-4251-be4e-4def80a11ec7,Namespace:kube-system,Attempt:0,}"
Jan 16 23:57:24.237476 containerd[1486]: time="2026-01-16T23:57:24.236848438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866b5b959f-q6rnd,Uid:ab55bbc8-2f84-4b63-ae7a-3f7a0c596089,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:24.237764 containerd[1486]: time="2026-01-16T23:57:24.237738919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848475bf4c-wx5qm,Uid:b69575f7-9998-4116-85c5-3f82382a6495,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:24.254823 containerd[1486]: time="2026-01-16T23:57:24.254708819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-6hcrh,Uid:2216c59e-7647-44aa-810f-40503d382780,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 23:57:24.274186 containerd[1486]: time="2026-01-16T23:57:24.274130912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cvb6h,Uid:7c95011c-e199-44d0-b1e0-17f58fab750a,Namespace:kube-system,Attempt:0,}"
Jan 16 23:57:24.278492 containerd[1486]: time="2026-01-16T23:57:24.278436310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-kxjbr,Uid:28527141-9485-40ed-9795-772c961207d3,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 23:57:24.288060 containerd[1486]: time="2026-01-16T23:57:24.287672335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hkcjr,Uid:494d2d41-870f-485e-a8b2-cbb0fecf4357,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:24.390276 systemd[1]: Created slice kubepods-besteffort-podebb15273_01f0_4342_86a4_e67c5f3e53d0.slice - libcontainer container kubepods-besteffort-podebb15273_01f0_4342_86a4_e67c5f3e53d0.slice.
Jan 16 23:57:24.398006 containerd[1486]: time="2026-01-16T23:57:24.397610270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b76rd,Uid:ebb15273-01f0-4342-86a4-e67c5f3e53d0,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:24.410081 containerd[1486]: time="2026-01-16T23:57:24.409350210Z" level=error msg="Failed to destroy network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.411565 containerd[1486]: time="2026-01-16T23:57:24.411486228Z" level=error msg="encountered an error cleaning up failed sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.411824 containerd[1486]: time="2026-01-16T23:57:24.411599473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8gpg,Uid:2e8f4602-9cf1-4251-be4e-4def80a11ec7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.412468 kubelet[2574]: E0116 23:57:24.412082 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.412468 kubelet[2574]: E0116 23:57:24.412159 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m8gpg"
Jan 16 23:57:24.412468 kubelet[2574]: E0116 23:57:24.412185 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m8gpg"
Jan 16 23:57:24.412610 kubelet[2574]: E0116 23:57:24.412244 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m8gpg_kube-system(2e8f4602-9cf1-4251-be4e-4def80a11ec7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m8gpg_kube-system(2e8f4602-9cf1-4251-be4e-4def80a11ec7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m8gpg" podUID="2e8f4602-9cf1-4251-be4e-4def80a11ec7"
Jan 16 23:57:24.445672 containerd[1486]: time="2026-01-16T23:57:24.445598756Z" level=error msg="Failed to destroy network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.448442 containerd[1486]: time="2026-01-16T23:57:24.448214637Z" level=error msg="encountered an error cleaning up failed sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.448838 containerd[1486]: time="2026-01-16T23:57:24.448795543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866b5b959f-q6rnd,Uid:ab55bbc8-2f84-4b63-ae7a-3f7a0c596089,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.449401 kubelet[2574]: E0116 23:57:24.449360 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.449485 kubelet[2574]: E0116 23:57:24.449428 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd"
Jan 16 23:57:24.449757 kubelet[2574]: E0116 23:57:24.449450 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd"
Jan 16 23:57:24.449849 kubelet[2574]: E0116 23:57:24.449820 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-866b5b959f-q6rnd_calico-system(ab55bbc8-2f84-4b63-ae7a-3f7a0c596089)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-866b5b959f-q6rnd_calico-system(ab55bbc8-2f84-4b63-ae7a-3f7a0c596089)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089"
Jan 16 23:57:24.509679 containerd[1486]: time="2026-01-16T23:57:24.509628941Z" level=error msg="Failed to destroy network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.510508 containerd[1486]: time="2026-01-16T23:57:24.510333093Z" level=error msg="encountered an error cleaning up failed sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.510508 containerd[1486]: time="2026-01-16T23:57:24.510391776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848475bf4c-wx5qm,Uid:b69575f7-9998-4116-85c5-3f82382a6495,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.510733 kubelet[2574]: E0116 23:57:24.510676 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.510868 kubelet[2574]: E0116 23:57:24.510783 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848475bf4c-wx5qm"
Jan 16 23:57:24.510868 kubelet[2574]: E0116 23:57:24.510813 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848475bf4c-wx5qm"
Jan 16 23:57:24.511019 kubelet[2574]: E0116 23:57:24.510880 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-848475bf4c-wx5qm_calico-system(b69575f7-9998-4116-85c5-3f82382a6495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-848475bf4c-wx5qm_calico-system(b69575f7-9998-4116-85c5-3f82382a6495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848475bf4c-wx5qm" podUID="b69575f7-9998-4116-85c5-3f82382a6495"
Jan 16 23:57:24.526293 containerd[1486]: time="2026-01-16T23:57:24.525008568Z" level=error msg="Failed to destroy network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.526293 containerd[1486]: time="2026-01-16T23:57:24.526143060Z" level=error msg="encountered an error cleaning up failed sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.526293 containerd[1486]: time="2026-01-16T23:57:24.526219983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-6hcrh,Uid:2216c59e-7647-44aa-810f-40503d382780,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.527239 kubelet[2574]: E0116 23:57:24.526745 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.527239 kubelet[2574]: E0116 23:57:24.526808 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh"
Jan 16 23:57:24.527239 kubelet[2574]: E0116 23:57:24.526827 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh"
Jan 16 23:57:24.527401 kubelet[2574]: E0116 23:57:24.526889 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c6f969f4-6hcrh_calico-apiserver(2216c59e-7647-44aa-810f-40503d382780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c6f969f4-6hcrh_calico-apiserver(2216c59e-7647-44aa-810f-40503d382780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780"
Jan 16 23:57:24.543066 containerd[1486]: time="2026-01-16T23:57:24.543012436Z" level=error msg="Failed to destroy network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.545258 containerd[1486]: time="2026-01-16T23:57:24.545218377Z" level=error msg="encountered an error cleaning up failed sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.545535 containerd[1486]: time="2026-01-16T23:57:24.545512471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-kxjbr,Uid:28527141-9485-40ed-9795-772c961207d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.545932 kubelet[2574]: I0116 23:57:24.545894 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb"
Jan 16 23:57:24.547172 kubelet[2574]: E0116 23:57:24.547114 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:24.547172 kubelet[2574]: E0116 23:57:24.547179 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr"
Jan 16 23:57:24.547172 kubelet[2574]: E0116 23:57:24.547199 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr"
Jan 16 23:57:24.547436 kubelet[2574]: E0116 23:57:24.547261 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c6f969f4-kxjbr_calico-apiserver(28527141-9485-40ed-9795-772c961207d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c6f969f4-kxjbr_calico-apiserver(28527141-9485-40ed-9795-772c961207d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 16 23:57:24.547663 containerd[1486]: time="2026-01-16T23:57:24.547632808Z" level=info msg="StopPodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\""
Jan 16 23:57:24.548615 containerd[1486]: time="2026-01-16T23:57:24.548581972Z" level=info msg="Ensure that sandbox 32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb in task-service has been cleanup successfully"
Jan 16 23:57:24.549658 kubelet[2574]: I0116 23:57:24.549417 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b"
Jan 16 23:57:24.552138 containerd[1486]: time="2026-01-16T23:57:24.551737077Z" level=info msg="StopPodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\""
Jan 16 23:57:24.552138 containerd[1486]: time="2026-01-16T23:57:24.551918005Z" level=info msg="Ensure that sandbox 6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b in task-service has been cleanup successfully"
Jan 16 23:57:24.552834 kubelet[2574]: I0116 23:57:24.552789 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576"
Jan 16 23:57:24.554287 containerd[1486]: time="2026-01-16T23:57:24.554143187Z" level=info msg="StopPodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\""
Jan 16 23:57:24.559791 containerd[1486]: time="2026-01-16T23:57:24.559750605Z" level=info msg="Ensure that sandbox a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576 in task-service has been cleanup successfully"
Jan 16 23:57:24.561345 kubelet[2574]: I0116 23:57:24.561190 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2"
Jan 16 23:57:24.563835 containerd[1486]: time="2026-01-16T23:57:24.563144161Z" level=info msg="StopPodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\""
Jan 16 23:57:24.563835 containerd[1486]: time="2026-01-16T23:57:24.563382932Z" level=info msg="Ensure that sandbox ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2 in task-service has been cleanup successfully"
Jan 16 23:57:24.577836 containerd[1486]: time="2026-01-16T23:57:24.576879513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
containerd[1486]: time="2026-01-16T23:57:24.576879513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 23:57:24.639486 containerd[1486]: time="2026-01-16T23:57:24.639418548Z" level=error msg="Failed to destroy network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.652567 containerd[1486]: time="2026-01-16T23:57:24.650740509Z" level=error msg="encountered an error cleaning up failed sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.658050 containerd[1486]: time="2026-01-16T23:57:24.656495454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cvb6h,Uid:7c95011c-e199-44d0-b1e0-17f58fab750a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.667377 kubelet[2574]: E0116 23:57:24.667045 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.667377 kubelet[2574]: E0116 23:57:24.667120 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cvb6h" Jan 16 23:57:24.667377 kubelet[2574]: E0116 23:57:24.667152 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cvb6h" Jan 16 23:57:24.667585 kubelet[2574]: E0116 23:57:24.667196 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cvb6h_kube-system(7c95011c-e199-44d0-b1e0-17f58fab750a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cvb6h_kube-system(7c95011c-e199-44d0-b1e0-17f58fab750a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cvb6h" podUID="7c95011c-e199-44d0-b1e0-17f58fab750a" Jan 16 23:57:24.713698 containerd[1486]: time="2026-01-16T23:57:24.713578478Z" level=error msg="Failed to destroy network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.713698 containerd[1486]: time="2026-01-16T23:57:24.715022625Z" level=error msg="encountered an error cleaning up failed sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.713698 containerd[1486]: time="2026-01-16T23:57:24.715110509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hkcjr,Uid:494d2d41-870f-485e-a8b2-cbb0fecf4357,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.719019 containerd[1486]: time="2026-01-16T23:57:24.718953406Z" level=error msg="StopPodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" failed" error="failed to destroy network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.719638 kubelet[2574]: E0116 23:57:24.719194 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.719638 kubelet[2574]: E0116 23:57:24.719263 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hkcjr" Jan 16 23:57:24.719638 kubelet[2574]: E0116 23:57:24.719283 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hkcjr" Jan 16 23:57:24.719843 kubelet[2574]: E0116 23:57:24.719366 2574 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hkcjr_calico-system(494d2d41-870f-485e-a8b2-cbb0fecf4357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hkcjr_calico-system(494d2d41-870f-485e-a8b2-cbb0fecf4357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:57:24.719843 kubelet[2574]: E0116 23:57:24.719194 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:24.719843 kubelet[2574]: E0116 23:57:24.719540 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b"} Jan 16 23:57:24.719843 kubelet[2574]: E0116 23:57:24.719597 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b69575f7-9998-4116-85c5-3f82382a6495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:24.720018 kubelet[2574]: E0116 23:57:24.719616 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b69575f7-9998-4116-85c5-3f82382a6495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848475bf4c-wx5qm" podUID="b69575f7-9998-4116-85c5-3f82382a6495" Jan 16 23:57:24.721465 containerd[1486]: time="2026-01-16T23:57:24.720714207Z" level=error msg="StopPodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" failed" error="failed to destroy network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.721465 containerd[1486]: time="2026-01-16T23:57:24.721119185Z" level=error msg="StopPodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" failed" error="failed to destroy network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.722483 kubelet[2574]: E0116 23:57:24.721673 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:24.722483 kubelet[2574]: E0116 23:57:24.721746 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2"} Jan 16 23:57:24.722483 kubelet[2574]: E0116 23:57:24.721781 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e8f4602-9cf1-4251-be4e-4def80a11ec7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:24.722483 kubelet[2574]: E0116 23:57:24.721801 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e8f4602-9cf1-4251-be4e-4def80a11ec7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m8gpg" podUID="2e8f4602-9cf1-4251-be4e-4def80a11ec7" Jan 16 23:57:24.722688 kubelet[2574]: E0116 23:57:24.721644 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:24.722688 kubelet[2574]: E0116 23:57:24.721868 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb"} Jan 16 23:57:24.722688 kubelet[2574]: E0116 23:57:24.721887 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2216c59e-7647-44aa-810f-40503d382780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:24.722688 kubelet[2574]: E0116 23:57:24.721904 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"2216c59e-7647-44aa-810f-40503d382780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:57:24.725879 containerd[1486]: time="2026-01-16T23:57:24.725802961Z" level=error msg="Failed to destroy network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.726437 containerd[1486]: time="2026-01-16T23:57:24.726392628Z" level=error msg="encountered an error cleaning up failed sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.727478 containerd[1486]: time="2026-01-16T23:57:24.726470471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b76rd,Uid:ebb15273-01f0-4342-86a4-e67c5f3e53d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.728858 kubelet[2574]: E0116 23:57:24.728808 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.728992 kubelet[2574]: E0116 23:57:24.728867 2574 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b76rd" Jan 16 23:57:24.728992 kubelet[2574]: E0116 23:57:24.728903 2574 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b76rd" Jan 16 23:57:24.728992 kubelet[2574]: E0116 23:57:24.728945 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:57:24.739742 containerd[1486]: time="2026-01-16T23:57:24.739608435Z" level=error msg="StopPodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" failed" error="failed to destroy network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:24.739930 kubelet[2574]: E0116 23:57:24.739873 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:24.739981 kubelet[2574]: E0116 23:57:24.739951 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576"} Jan 16 23:57:24.740008 kubelet[2574]: E0116 23:57:24.739995 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:24.740061 kubelet[2574]: E0116 23:57:24.740017 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:57:25.101602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576-shm.mount: Deactivated successfully. Jan 16 23:57:25.101732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2-shm.mount: Deactivated successfully. 
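
Every failed ADD/DEL above dies on the same stat of /var/lib/calico/nodename: the Calico CNI plugin resolves the node name from a file that calico/node writes when it starts, and at this point in the log the calico/node image is still being pulled. A minimal Go sketch of that lookup, assuming nothing beyond the path quoted in the error text (illustration only, not the Calico source):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path quoted verbatim in the errors above; calico/node creates it at
        // startup, and every CNI ADD/DEL fails until it exists.
        data, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            // Before calico/node is up this prints the same failure mode seen
            // in the log: "no such file or directory".
            fmt.Println("nodename not ready:", err)
            return
        }
        fmt.Println("nodename:", string(data))
    }

Once calico/node starts (23:57:29 below), the file can be written and the retried sandbox operations begin to succeed.
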
Jan 16 23:57:25.577636 kubelet[2574]: I0116 23:57:25.577575 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff"
Jan 16 23:57:25.578848 containerd[1486]: time="2026-01-16T23:57:25.578721496Z" level=info msg="StopPodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\""
Jan 16 23:57:25.579151 containerd[1486]: time="2026-01-16T23:57:25.578894304Z" level=info msg="Ensure that sandbox 1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff in task-service has been cleanup successfully"
Jan 16 23:57:25.582807 kubelet[2574]: I0116 23:57:25.582223 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7"
Jan 16 23:57:25.583570 containerd[1486]: time="2026-01-16T23:57:25.583541075Z" level=info msg="StopPodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\""
Jan 16 23:57:25.584292 containerd[1486]: time="2026-01-16T23:57:25.584038097Z" level=info msg="Ensure that sandbox 260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7 in task-service has been cleanup successfully"
Jan 16 23:57:25.585211 kubelet[2574]: I0116 23:57:25.584768 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"
Jan 16 23:57:25.586851 containerd[1486]: time="2026-01-16T23:57:25.585443561Z" level=info msg="StopPodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\""
Jan 16 23:57:25.587068 containerd[1486]: time="2026-01-16T23:57:25.586998912Z" level=info msg="Ensure that sandbox a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77 in task-service has been cleanup successfully"
Jan 16 23:57:25.593101 kubelet[2574]: I0116 23:57:25.593041 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3"
Jan 16 23:57:25.595023 containerd[1486]: time="2026-01-16T23:57:25.594991674Z" level=info msg="StopPodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\""
Jan 16 23:57:25.595996 containerd[1486]: time="2026-01-16T23:57:25.595688826Z" level=info msg="Ensure that sandbox edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3 in task-service has been cleanup successfully"
Jan 16 23:57:25.641715 containerd[1486]: time="2026-01-16T23:57:25.641435301Z" level=error msg="StopPodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" failed" error="failed to destroy network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:25.642243 kubelet[2574]: E0116 23:57:25.642128 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff"
Jan 16 23:57:25.642298 kubelet[2574]: E0116 23:57:25.642181 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff"}
Jan 16 23:57:25.642339 kubelet[2574]: E0116 23:57:25.642308 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 23:57:25.642613 kubelet[2574]: E0116 23:57:25.642330 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebb15273-01f0-4342-86a4-e67c5f3e53d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 16 23:57:25.652170 containerd[1486]: time="2026-01-16T23:57:25.652115145Z" level=error msg="StopPodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" failed" error="failed to destroy network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:25.652655 kubelet[2574]: E0116 23:57:25.652517 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"
Jan 16 23:57:25.652655 kubelet[2574]: E0116 23:57:25.652574 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"}
Jan 16 23:57:25.652655 kubelet[2574]: E0116 23:57:25.652608 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c95011c-e199-44d0-b1e0-17f58fab750a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 23:57:25.652655 kubelet[2574]: E0116 23:57:25.652628 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c95011c-e199-44d0-b1e0-17f58fab750a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cvb6h" podUID="7c95011c-e199-44d0-b1e0-17f58fab750a"
Jan 16 23:57:25.655484 containerd[1486]: time="2026-01-16T23:57:25.655396414Z" level=error msg="StopPodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" failed" error="failed to destroy network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:25.655685 kubelet[2574]: E0116 23:57:25.655641 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7"
Jan 16 23:57:25.655741 kubelet[2574]: E0116 23:57:25.655691 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7"}
Jan 16 23:57:25.655741 kubelet[2574]: E0116 23:57:25.655727 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"494d2d41-870f-485e-a8b2-cbb0fecf4357\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 23:57:25.655920 kubelet[2574]: E0116 23:57:25.655748 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"494d2d41-870f-485e-a8b2-cbb0fecf4357\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357"
Jan 16 23:57:25.657297 containerd[1486]: time="2026-01-16T23:57:25.657225457Z" level=error msg="StopPodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" failed" error="failed to destroy network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:25.657726 kubelet[2574]: E0116 23:57:25.657564 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3"
Jan 16 23:57:25.657726 kubelet[2574]: E0116 23:57:25.657622 2574 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3"}
Jan 16 23:57:25.657726 kubelet[2574]: E0116 23:57:25.657660 2574 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28527141-9485-40ed-9795-772c961207d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 23:57:25.657726 kubelet[2574]: E0116 23:57:25.657691 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28527141-9485-40ed-9795-772c961207d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 16 23:57:29.077085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188247448.mount: Deactivated successfully.
Jan 16 23:57:29.119169 containerd[1486]: time="2026-01-16T23:57:29.119072994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:29.121372 containerd[1486]: time="2026-01-16T23:57:29.120926874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562"
Jan 16 23:57:29.122726 containerd[1486]: time="2026-01-16T23:57:29.122644468Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:29.128579 containerd[1486]: time="2026-01-16T23:57:29.127790691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:29.128579 containerd[1486]: time="2026-01-16T23:57:29.128380916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.551238312s"
Jan 16 23:57:29.128579 containerd[1486]: time="2026-01-16T23:57:29.128418158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\""
Jan 16 23:57:29.150873 containerd[1486]: time="2026-01-16T23:57:29.150828646Z" level=info msg="CreateContainer within sandbox \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 16 23:57:29.170368 containerd[1486]: time="2026-01-16T23:57:29.170289247Z" level=info msg="CreateContainer within sandbox \"be7db9b67e61455d5b4c284e0e2c8c73dcbcf3ef497c3c1aa1ba61184f808fa6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3\""
Jan 16 23:57:29.172571 containerd[1486]: time="2026-01-16T23:57:29.171808593Z" level=info msg="StartContainer for \"e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3\""
Jan 16 23:57:29.205715 systemd[1]: Started cri-containerd-e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3.scope - libcontainer container e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3.
Jan 16 23:57:29.239955 containerd[1486]: time="2026-01-16T23:57:29.239909496Z" level=info msg="StartContainer for \"e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3\" returns successfully"
Jan 16 23:57:29.389106 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 16 23:57:29.389256 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
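
The pull that unblocks the CNI started at 23:57:24.576 and finished at 23:57:29.128, matching the reported 4.551238312s; at bytes read=150934562 that is roughly 31.6 MiB/s. The arithmetic, using only the figures quoted above:

    package main

    import "fmt"

    func main() {
        // Figures quoted in the pull log above.
        const bytesRead = 150934562.0 // "bytes read=150934562"
        const seconds = 4.551238312   // "in 4.551238312s"
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~31.6 MiB/s
    }
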
Jan 16 23:57:29.559071 containerd[1486]: time="2026-01-16T23:57:29.559021446Z" level=info msg="StopPodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\""
Jan 16 23:57:29.635286 kubelet[2574]: I0116 23:57:29.635193 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-788k7" podStartSLOduration=1.155032513 podStartE2EDuration="13.635176577s" podCreationTimestamp="2026-01-16 23:57:16 +0000 UTC" firstStartedPulling="2026-01-16 23:57:16.649748718 +0000 UTC m=+26.383811546" lastFinishedPulling="2026-01-16 23:57:29.129892782 +0000 UTC m=+38.863955610" observedRunningTime="2026-01-16 23:57:29.634433265 +0000 UTC m=+39.368496093" watchObservedRunningTime="2026-01-16 23:57:29.635176577 +0000 UTC m=+39.369239365"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.669 [INFO][3818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.671 [INFO][3818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" iface="eth0" netns="/var/run/netns/cni-183e72f4-d662-1fac-b00f-0355bdcfef42"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.671 [INFO][3818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" iface="eth0" netns="/var/run/netns/cni-183e72f4-d662-1fac-b00f-0355bdcfef42"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.673 [INFO][3818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" iface="eth0" netns="/var/run/netns/cni-183e72f4-d662-1fac-b00f-0355bdcfef42"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.673 [INFO][3818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.673 [INFO][3818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.726 [INFO][3825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.726 [INFO][3825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.726 [INFO][3825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.742 [WARNING][3825] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.742 [INFO][3825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0"
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.744 [INFO][3825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:57:29.751574 containerd[1486]: 2026-01-16 23:57:29.748 [INFO][3818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b"
Jan 16 23:57:29.752164 containerd[1486]: time="2026-01-16T23:57:29.752130792Z" level=info msg="TearDown network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" successfully"
Jan 16 23:57:29.752202 containerd[1486]: time="2026-01-16T23:57:29.752166233Z" level=info msg="StopPodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" returns successfully"
Jan 16 23:57:29.798867 kubelet[2574]: I0116 23:57:29.797135 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b69575f7-9998-4116-85c5-3f82382a6495-whisker-backend-key-pair\") pod \"b69575f7-9998-4116-85c5-3f82382a6495\" (UID: \"b69575f7-9998-4116-85c5-3f82382a6495\") "
Jan 16 23:57:29.798867 kubelet[2574]: I0116 23:57:29.797233 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5v9d\" (UniqueName: \"kubernetes.io/projected/b69575f7-9998-4116-85c5-3f82382a6495-kube-api-access-x5v9d\") pod \"b69575f7-9998-4116-85c5-3f82382a6495\" (UID: \"b69575f7-9998-4116-85c5-3f82382a6495\") "
Jan 16 23:57:29.798867 kubelet[2574]: I0116 23:57:29.797302 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b69575f7-9998-4116-85c5-3f82382a6495-whisker-ca-bundle\") pod \"b69575f7-9998-4116-85c5-3f82382a6495\" (UID: \"b69575f7-9998-4116-85c5-3f82382a6495\") "
Jan 16 23:57:29.798867 kubelet[2574]: I0116 23:57:29.798115 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b69575f7-9998-4116-85c5-3f82382a6495-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b69575f7-9998-4116-85c5-3f82382a6495" (UID: "b69575f7-9998-4116-85c5-3f82382a6495"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 16 23:57:29.809436 kubelet[2574]: I0116 23:57:29.808916 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b69575f7-9998-4116-85c5-3f82382a6495-kube-api-access-x5v9d" (OuterVolumeSpecName: "kube-api-access-x5v9d") pod "b69575f7-9998-4116-85c5-3f82382a6495" (UID: "b69575f7-9998-4116-85c5-3f82382a6495"). InnerVolumeSpecName "kube-api-access-x5v9d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 16 23:57:29.809436 kubelet[2574]: I0116 23:57:29.809181 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b69575f7-9998-4116-85c5-3f82382a6495-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b69575f7-9998-4116-85c5-3f82382a6495" (UID: "b69575f7-9998-4116-85c5-3f82382a6495"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 16 23:57:29.898343 kubelet[2574]: I0116 23:57:29.898251 2574 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b69575f7-9998-4116-85c5-3f82382a6495-whisker-ca-bundle\") on node \"ci-4081-3-6-n-32c338e5e2\" DevicePath \"\""
Jan 16 23:57:29.898343 kubelet[2574]: I0116 23:57:29.898291 2574 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b69575f7-9998-4116-85c5-3f82382a6495-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-32c338e5e2\" DevicePath \"\""
Jan 16 23:57:29.898343 kubelet[2574]: I0116 23:57:29.898300 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x5v9d\" (UniqueName: \"kubernetes.io/projected/b69575f7-9998-4116-85c5-3f82382a6495-kube-api-access-x5v9d\") on node \"ci-4081-3-6-n-32c338e5e2\" DevicePath \"\""
Jan 16 23:57:30.080407 systemd[1]: run-netns-cni\x2d183e72f4\x2dd662\x2d1fac\x2db00f\x2d0355bdcfef42.mount: Deactivated successfully.
Jan 16 23:57:30.080870 systemd[1]: var-lib-kubelet-pods-b69575f7\x2d9998\x2d4116\x2d85c5\x2d3f82382a6495-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5v9d.mount: Deactivated successfully.
Jan 16 23:57:30.081091 systemd[1]: var-lib-kubelet-pods-b69575f7\x2d9998\x2d4116\x2d85c5\x2d3f82382a6495-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 16 23:57:30.394523 systemd[1]: Removed slice kubepods-besteffort-podb69575f7_9998_4116_85c5_3f82382a6495.slice - libcontainer container kubepods-besteffort-podb69575f7_9998_4116_85c5_3f82382a6495.slice.
Jan 16 23:57:30.614232 kubelet[2574]: I0116 23:57:30.614127 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 23:57:30.699010 systemd[1]: Created slice kubepods-besteffort-pod40cdc5e9_1abb_47b0_ad9d_ea94f986178b.slice - libcontainer container kubepods-besteffort-pod40cdc5e9_1abb_47b0_ad9d_ea94f986178b.slice.
Jan 16 23:57:30.806799 kubelet[2574]: I0116 23:57:30.806732 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40cdc5e9-1abb-47b0-ad9d-ea94f986178b-whisker-ca-bundle\") pod \"whisker-58dd6fc975-fz8dg\" (UID: \"40cdc5e9-1abb-47b0-ad9d-ea94f986178b\") " pod="calico-system/whisker-58dd6fc975-fz8dg"
Jan 16 23:57:30.806799 kubelet[2574]: I0116 23:57:30.806797 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40cdc5e9-1abb-47b0-ad9d-ea94f986178b-whisker-backend-key-pair\") pod \"whisker-58dd6fc975-fz8dg\" (UID: \"40cdc5e9-1abb-47b0-ad9d-ea94f986178b\") " pod="calico-system/whisker-58dd6fc975-fz8dg"
Jan 16 23:57:30.807555 kubelet[2574]: I0116 23:57:30.806838 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph5z8\" (UniqueName: \"kubernetes.io/projected/40cdc5e9-1abb-47b0-ad9d-ea94f986178b-kube-api-access-ph5z8\") pod \"whisker-58dd6fc975-fz8dg\" (UID: \"40cdc5e9-1abb-47b0-ad9d-ea94f986178b\") " pod="calico-system/whisker-58dd6fc975-fz8dg"
Jan 16 23:57:31.004890 containerd[1486]: time="2026-01-16T23:57:31.004112600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dd6fc975-fz8dg,Uid:40cdc5e9-1abb-47b0-ad9d-ea94f986178b,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:31.248808 systemd-networkd[1380]: cali32854f07066: Link UP
Jan 16 23:57:31.250172 systemd-networkd[1380]: cali32854f07066: Gained carrier
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.084 [INFO][3938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.111 [INFO][3938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0 whisker-58dd6fc975- calico-system 40cdc5e9-1abb-47b0-ad9d-ea94f986178b 924 0 2026-01-16 23:57:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58dd6fc975 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 whisker-58dd6fc975-fz8dg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali32854f07066 [] [] }} ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.112 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.166 [INFO][3949] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" HandleID="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.167 [INFO][3949] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" HandleID="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"whisker-58dd6fc975-fz8dg", "timestamp":"2026-01-16 23:57:31.16686293 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.167 [INFO][3949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.167 [INFO][3949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.167 [INFO][3949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2'
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.179 [INFO][3949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.190 [INFO][3949] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.196 [INFO][3949] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.200 [INFO][3949] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.206 [INFO][3949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.206 [INFO][3949] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.213 [INFO][3949] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.220 [INFO][3949] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.229 [INFO][3949] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.1/26] block=192.168.58.0/26 handle="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.229 [INFO][3949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.1/26] handle="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.229 [INFO][3949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:57:31.273547 containerd[1486]: 2026-01-16 23:57:31.230 [INFO][3949] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.1/26] IPv6=[] ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" HandleID="k8s-pod-network.55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.274286 containerd[1486]: 2026-01-16 23:57:31.234 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0", GenerateName:"whisker-58dd6fc975-", Namespace:"calico-system", SelfLink:"", UID:"40cdc5e9-1abb-47b0-ad9d-ea94f986178b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58dd6fc975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"whisker-58dd6fc975-fz8dg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32854f07066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:31.274286 containerd[1486]: 2026-01-16 23:57:31.234 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.1/32] ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.274286 containerd[1486]: 2026-01-16 23:57:31.235 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32854f07066 ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.274286 containerd[1486]: 2026-01-16 23:57:31.249 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.274286 containerd[1486]: 2026-01-16 23:57:31.252 [INFO][3938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0", GenerateName:"whisker-58dd6fc975-", Namespace:"calico-system", SelfLink:"", UID:"40cdc5e9-1abb-47b0-ad9d-ea94f986178b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58dd6fc975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77", Pod:"whisker-58dd6fc975-fz8dg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32854f07066", MAC:"f2:24:10:85:5e:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:31.274286 containerd[1486]: 2026-01-16 23:57:31.268 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77" Namespace="calico-system" Pod="whisker-58dd6fc975-fz8dg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--58dd6fc975--fz8dg-eth0"
Jan 16 23:57:31.323123 containerd[1486]: time="2026-01-16T23:57:31.322786171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:57:31.323123 containerd[1486]: time="2026-01-16T23:57:31.322847334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:57:31.323123 containerd[1486]: time="2026-01-16T23:57:31.322858654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:31.323123 containerd[1486]: time="2026-01-16T23:57:31.322951978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:31.367890 systemd[1]: Started cri-containerd-55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77.scope - libcontainer container 55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77.
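
The IPAM trace above hands out 192.168.58.1 from the node's affine block 192.168.58.0/26, a 64-address range (.0 through .63). A quick containment check with Go's net/netip, as an illustration rather than Calico's own code:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and address taken from the IPAM log above; a /26 spans
        // 64 addresses, 192.168.58.0 through 192.168.58.63.
        block := netip.MustParsePrefix("192.168.58.0/26")
        addr := netip.MustParseAddr("192.168.58.1")
        fmt.Println(block.Contains(addr)) // true
    }
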
Jan 16 23:57:31.440040 containerd[1486]: time="2026-01-16T23:57:31.439988493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dd6fc975-fz8dg,Uid:40cdc5e9-1abb-47b0-ad9d-ea94f986178b,Namespace:calico-system,Attempt:0,} returns sandbox id \"55da6326587ecff8e1fe37965a107edb0344517b6cbccb279380cc9d31202c77\""
Jan 16 23:57:31.452726 containerd[1486]: time="2026-01-16T23:57:31.452333335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 16 23:57:31.810335 containerd[1486]: time="2026-01-16T23:57:31.810091361Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:31.812519 containerd[1486]: time="2026-01-16T23:57:31.812313335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 16 23:57:31.812519 containerd[1486]: time="2026-01-16T23:57:31.812400459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 16 23:57:31.814210 kubelet[2574]: E0116 23:57:31.814136 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:57:31.814623 kubelet[2574]: E0116 23:57:31.814220 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:57:31.821942 kubelet[2574]: E0116 23:57:31.821866 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f02b4406bf4653bd0ff6a5488c4241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:31.825584 containerd[1486]: time="2026-01-16T23:57:31.825522615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 16 23:57:32.163882 containerd[1486]: time="2026-01-16T23:57:32.163581062Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:32.165859 containerd[1486]: time="2026-01-16T23:57:32.165661869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:57:32.165859 containerd[1486]: time="2026-01-16T23:57:32.165675710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 16 23:57:32.166072 kubelet[2574]: E0116 23:57:32.166031 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:57:32.166142 kubelet[2574]: E0116 23:57:32.166082 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:57:32.166236 kubelet[2574]: E0116 23:57:32.166191 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:32.167732 kubelet[2574]: E0116 23:57:32.167658 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b"
Jan 16 23:57:32.386383 kubelet[2574]: I0116 23:57:32.386179 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b69575f7-9998-4116-85c5-3f82382a6495" path="/var/lib/kubelet/pods/b69575f7-9998-4116-85c5-3f82382a6495/volumes"
Jan 16
23:57:32.624966 kubelet[2574]: E0116 23:57:32.624891 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:57:32.775787 systemd-networkd[1380]: cali32854f07066: Gained IPv6LL Jan 16 23:57:37.380062 containerd[1486]: time="2026-01-16T23:57:37.379972887Z" level=info msg="StopPodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\"" Jan 16 23:57:37.381782 containerd[1486]: time="2026-01-16T23:57:37.381065600Z" level=info msg="StopPodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\"" Jan 16 23:57:37.383082 containerd[1486]: time="2026-01-16T23:57:37.383003806Z" level=info msg="StopPodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\"" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.459 [INFO][4136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.460 [INFO][4136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" iface="eth0" netns="/var/run/netns/cni-3823aa54-6f4a-1867-21fe-0ec1d7afa581" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.460 [INFO][4136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" iface="eth0" netns="/var/run/netns/cni-3823aa54-6f4a-1867-21fe-0ec1d7afa581" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.460 [INFO][4136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" iface="eth0" netns="/var/run/netns/cni-3823aa54-6f4a-1867-21fe-0ec1d7afa581" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.460 [INFO][4136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.460 [INFO][4136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.509 [INFO][4162] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.512 [INFO][4162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.512 [INFO][4162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.526 [WARNING][4162] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.526 [INFO][4162] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.529 [INFO][4162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:37.538876 containerd[1486]: 2026-01-16 23:57:37.532 [INFO][4136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:37.541822 containerd[1486]: time="2026-01-16T23:57:37.541480832Z" level=info msg="TearDown network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" successfully" Jan 16 23:57:37.541822 containerd[1486]: time="2026-01-16T23:57:37.541519949Z" level=info msg="StopPodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" returns successfully" Jan 16 23:57:37.544395 systemd[1]: run-netns-cni\x2d3823aa54\x2d6f4a\x2d1867\x2d21fe\x2d0ec1d7afa581.mount: Deactivated successfully. Jan 16 23:57:37.546678 containerd[1486]: time="2026-01-16T23:57:37.546119304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hkcjr,Uid:494d2d41-870f-485e-a8b2-cbb0fecf4357,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.494 [INFO][4147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.495 [INFO][4147] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" iface="eth0" netns="/var/run/netns/cni-21a21416-fe53-36ee-fd56-619d6dbedd75" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.495 [INFO][4147] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" iface="eth0" netns="/var/run/netns/cni-21a21416-fe53-36ee-fd56-619d6dbedd75" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.495 [INFO][4147] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" iface="eth0" netns="/var/run/netns/cni-21a21416-fe53-36ee-fd56-619d6dbedd75" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.495 [INFO][4147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.495 [INFO][4147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.539 [INFO][4174] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.539 [INFO][4174] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.539 [INFO][4174] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.559 [WARNING][4174] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.559 [INFO][4174] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.563 [INFO][4174] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:37.579681 containerd[1486]: 2026-01-16 23:57:37.571 [INFO][4147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:37.584132 systemd[1]: run-netns-cni\x2d21a21416\x2dfe53\x2d36ee\x2dfd56\x2d619d6dbedd75.mount: Deactivated successfully. 
Jan 16 23:57:37.584954 containerd[1486]: time="2026-01-16T23:57:37.584405266Z" level=info msg="TearDown network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" successfully" Jan 16 23:57:37.584954 containerd[1486]: time="2026-01-16T23:57:37.584444223Z" level=info msg="StopPodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" returns successfully" Jan 16 23:57:37.587450 containerd[1486]: time="2026-01-16T23:57:37.586683125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b76rd,Uid:ebb15273-01f0-4342-86a4-e67c5f3e53d0,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.485 [INFO][4148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.486 [INFO][4148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" iface="eth0" netns="/var/run/netns/cni-995c4572-4f9a-d838-9360-6bf4798659dd" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.487 [INFO][4148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" iface="eth0" netns="/var/run/netns/cni-995c4572-4f9a-d838-9360-6bf4798659dd" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.487 [INFO][4148] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" iface="eth0" netns="/var/run/netns/cni-995c4572-4f9a-d838-9360-6bf4798659dd" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.487 [INFO][4148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.487 [INFO][4148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.543 [INFO][4169] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.543 [INFO][4169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.569 [INFO][4169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.592 [WARNING][4169] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.592 [INFO][4169] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.595 [INFO][4169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:37.606715 containerd[1486]: 2026-01-16 23:57:37.597 [INFO][4148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:37.607546 containerd[1486]: time="2026-01-16T23:57:37.606844925Z" level=info msg="TearDown network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" successfully" Jan 16 23:57:37.607546 containerd[1486]: time="2026-01-16T23:57:37.606874563Z" level=info msg="StopPodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" returns successfully" Jan 16 23:57:37.625592 containerd[1486]: time="2026-01-16T23:57:37.625548961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8gpg,Uid:2e8f4602-9cf1-4251-be4e-4def80a11ec7,Namespace:kube-system,Attempt:1,}" Jan 16 23:57:37.761750 systemd-networkd[1380]: calib94ac785ab1: Link UP Jan 16 23:57:37.764887 systemd-networkd[1380]: calib94ac785ab1: Gained carrier Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.610 [INFO][4186] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.628 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0 goldmane-666569f655- calico-system 494d2d41-870f-485e-a8b2-cbb0fecf4357 959 0 2026-01-16 23:57:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 goldmane-666569f655-hkcjr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib94ac785ab1 [] [] }} ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.628 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.673 [INFO][4208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" HandleID="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" 
Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.674 [INFO][4208] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" HandleID="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"goldmane-666569f655-hkcjr", "timestamp":"2026-01-16 23:57:37.673882486 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.674 [INFO][4208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.674 [INFO][4208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.674 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2' Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.686 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.691 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.700 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.705 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.711 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.711 [INFO][4208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.715 [INFO][4208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8 Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.730 [INFO][4208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.743 [INFO][4208] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.2/26] block=192.168.58.0/26 handle="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.743 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.2/26] 
handle="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.743 [INFO][4208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:37.788722 containerd[1486]: 2026-01-16 23:57:37.743 [INFO][4208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.2/26] IPv6=[] ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" HandleID="k8s-pod-network.ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.789513 containerd[1486]: 2026-01-16 23:57:37.749 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"494d2d41-870f-485e-a8b2-cbb0fecf4357", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"goldmane-666569f655-hkcjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib94ac785ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.789513 containerd[1486]: 2026-01-16 23:57:37.750 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.2/32] ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.789513 containerd[1486]: 2026-01-16 23:57:37.750 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib94ac785ab1 ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.789513 containerd[1486]: 2026-01-16 23:57:37.766 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" 
WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.789513 containerd[1486]: 2026-01-16 23:57:37.766 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"494d2d41-870f-485e-a8b2-cbb0fecf4357", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8", Pod:"goldmane-666569f655-hkcjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib94ac785ab1", MAC:"ea:c7:92:fa:3d:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.789513 containerd[1486]: 2026-01-16 23:57:37.785 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8" Namespace="calico-system" Pod="goldmane-666569f655-hkcjr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:37.822308 containerd[1486]: time="2026-01-16T23:57:37.822136243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:37.822308 containerd[1486]: time="2026-01-16T23:57:37.822229995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:37.822308 containerd[1486]: time="2026-01-16T23:57:37.822284271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.824282 containerd[1486]: time="2026-01-16T23:57:37.824128005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.848024 systemd-networkd[1380]: cali65de395b1aa: Link UP Jan 16 23:57:37.850390 systemd-networkd[1380]: cali65de395b1aa: Gained carrier Jan 16 23:57:37.856228 systemd[1]: Started cri-containerd-ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8.scope - libcontainer container ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8. 
Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.640 [INFO][4195] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.659 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0 csi-node-driver- calico-system ebb15273-01f0-4342-86a4-e67c5f3e53d0 961 0 2026-01-16 23:57:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 csi-node-driver-b76rd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali65de395b1aa [] [] }} ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.659 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.720 [INFO][4226] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" HandleID="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.720 [INFO][4226] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" HandleID="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"csi-node-driver-b76rd", "timestamp":"2026-01-16 23:57:37.720204091 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.720 [INFO][4226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.743 [INFO][4226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.744 [INFO][4226] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2' Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.788 [INFO][4226] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.796 [INFO][4226] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.805 [INFO][4226] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.808 [INFO][4226] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.811 [INFO][4226] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.811 [INFO][4226] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.814 [INFO][4226] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.822 [INFO][4226] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.832 [INFO][4226] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.3/26] block=192.168.58.0/26 handle="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.832 [INFO][4226] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.3/26] handle="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.832 [INFO][4226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 23:57:37.877795 containerd[1486]: 2026-01-16 23:57:37.832 [INFO][4226] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.3/26] IPv6=[] ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" HandleID="k8s-pod-network.8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.878837 containerd[1486]: 2026-01-16 23:57:37.836 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebb15273-01f0-4342-86a4-e67c5f3e53d0", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"csi-node-driver-b76rd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65de395b1aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.878837 containerd[1486]: 2026-01-16 23:57:37.836 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.3/32] ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.878837 containerd[1486]: 2026-01-16 23:57:37.836 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65de395b1aa ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.878837 containerd[1486]: 2026-01-16 23:57:37.854 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.878837 containerd[1486]: 2026-01-16 23:57:37.855 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebb15273-01f0-4342-86a4-e67c5f3e53d0", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb", Pod:"csi-node-driver-b76rd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65de395b1aa", MAC:"c6:df:b2:17:6d:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.878837 containerd[1486]: 2026-01-16 23:57:37.875 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb" Namespace="calico-system" Pod="csi-node-driver-b76rd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:37.921661 containerd[1486]: time="2026-01-16T23:57:37.921337132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:37.921900 containerd[1486]: time="2026-01-16T23:57:37.921631628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:37.921900 containerd[1486]: time="2026-01-16T23:57:37.921653226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.922076 containerd[1486]: time="2026-01-16T23:57:37.921984120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.957602 containerd[1486]: time="2026-01-16T23:57:37.957231843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hkcjr,Uid:494d2d41-870f-485e-a8b2-cbb0fecf4357,Namespace:calico-system,Attempt:1,} returns sandbox id \"ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8\"" Jan 16 23:57:37.962606 containerd[1486]: time="2026-01-16T23:57:37.962504985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:57:37.980350 systemd-networkd[1380]: cali3c12ca45b8f: Link UP Jan 16 23:57:37.982810 systemd-networkd[1380]: cali3c12ca45b8f: Gained carrier Jan 16 23:57:37.988715 systemd[1]: Started cri-containerd-8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb.scope - libcontainer container 8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb. Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.697 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.732 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0 coredns-674b8bbfcf- kube-system 2e8f4602-9cf1-4251-be4e-4def80a11ec7 960 0 2026-01-16 23:56:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 coredns-674b8bbfcf-m8gpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c12ca45b8f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.732 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.780 [INFO][4235] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" HandleID="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.781 [INFO][4235] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" HandleID="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3050), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"coredns-674b8bbfcf-m8gpg", "timestamp":"2026-01-16 23:57:37.780810442 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.781 [INFO][4235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.833 [INFO][4235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.833 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2' Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.892 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.910 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.927 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.935 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.942 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.942 [INFO][4235] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.945 [INFO][4235] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.954 [INFO][4235] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.970 [INFO][4235] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.4/26] block=192.168.58.0/26 handle="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.970 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.4/26] handle="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.970 [INFO][4235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 23:57:38.018306 containerd[1486]: 2026-01-16 23:57:37.970 [INFO][4235] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.4/26] IPv6=[] ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" HandleID="k8s-pod-network.0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.019064 containerd[1486]: 2026-01-16 23:57:37.974 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2e8f4602-9cf1-4251-be4e-4def80a11ec7", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"coredns-674b8bbfcf-m8gpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c12ca45b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:38.019064 containerd[1486]: 2026-01-16 23:57:37.974 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.4/32] ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.019064 containerd[1486]: 2026-01-16 23:57:37.974 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c12ca45b8f ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.019064 containerd[1486]: 2026-01-16 23:57:37.981 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.019064 containerd[1486]: 2026-01-16 23:57:37.982 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2e8f4602-9cf1-4251-be4e-4def80a11ec7", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e", Pod:"coredns-674b8bbfcf-m8gpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c12ca45b8f", MAC:"52:79:09:8f:20:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:38.019064 containerd[1486]: 2026-01-16 23:57:38.013 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8gpg" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:38.047156 containerd[1486]: time="2026-01-16T23:57:38.046903593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:38.047156 containerd[1486]: time="2026-01-16T23:57:38.046964789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:38.047156 containerd[1486]: time="2026-01-16T23:57:38.046989467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:38.047156 containerd[1486]: time="2026-01-16T23:57:38.047087339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:38.078749 systemd[1]: Started cri-containerd-0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e.scope - libcontainer container 0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e. Jan 16 23:57:38.086342 containerd[1486]: time="2026-01-16T23:57:38.086268315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b76rd,Uid:ebb15273-01f0-4342-86a4-e67c5f3e53d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb\"" Jan 16 23:57:38.132737 containerd[1486]: time="2026-01-16T23:57:38.132343887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8gpg,Uid:2e8f4602-9cf1-4251-be4e-4def80a11ec7,Namespace:kube-system,Attempt:1,} returns sandbox id \"0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e\"" Jan 16 23:57:38.145126 containerd[1486]: time="2026-01-16T23:57:38.145053879Z" level=info msg="CreateContainer within sandbox \"0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 23:57:38.165911 containerd[1486]: time="2026-01-16T23:57:38.165835816Z" level=info msg="CreateContainer within sandbox \"0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2aad3cb223793501b0a964c2a6c6f66d12ba31ac712d0dc1fdbb7fbbfb3d53bc\"" Jan 16 23:57:38.168397 containerd[1486]: time="2026-01-16T23:57:38.168171998Z" level=info msg="StartContainer for \"2aad3cb223793501b0a964c2a6c6f66d12ba31ac712d0dc1fdbb7fbbfb3d53bc\"" Jan 16 23:57:38.210850 systemd[1]: Started cri-containerd-2aad3cb223793501b0a964c2a6c6f66d12ba31ac712d0dc1fdbb7fbbfb3d53bc.scope - libcontainer container 2aad3cb223793501b0a964c2a6c6f66d12ba31ac712d0dc1fdbb7fbbfb3d53bc. 
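The ipam/ipam.go entries above repeat the same sequence for each pod: acquire the host-wide IPAM lock, confirm the node's affinity for block 192.168.58.0/26, claim the next free address (.2, .3, .4 for goldmane, csi-node-driver, and coredns), write the block back to the datastore, and release the lock. A compact Go sketch of that allocate-under-lock pattern follows; the types, the in-memory "datastore", and the assumption that addresses below .2 were already claimed (presumably by an earlier workload such as the whisker sandbox) are illustrative, not Calico's real model.

```go
// Sketch of the allocate-under-lock sequence in the ipam/ipam.go entries
// above. Types, names, and the in-memory "datastore" are illustrative;
// Calico's real implementation persists blocks to its datastore.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu   sync.Mutex            // stands in for the host-wide IPAM lock
	cidr netip.Prefix          // the affine block, e.g. 192.168.58.0/26
	used map[netip.Addr]string // addr -> handle, as if written to the datastore
}

// autoAssign mirrors "Attempting to assign 1 addresses from block":
// lock, scan for a free address, record the handle, unlock.
func (b *block) autoAssign(handle string) (netip.Addr, error) {
	b.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.58.0/26"),
		used: map[netip.Addr]string{},
	}
	// Addresses below .2 were evidently claimed before this excerpt begins
	// (assignments here start at 192.168.58.2), so mark them used.
	for a := b.cidr.Addr(); a.Less(netip.MustParseAddr("192.168.58.2")); a = a.Next() {
		b.used[a] = "pre-existing"
	}
	pods := []string{"goldmane-666569f655-hkcjr", "csi-node-driver-b76rd", "coredns-674b8bbfcf-m8gpg"}
	for _, pod := range pods {
		// Real handles use the sandbox container ID; the pod name is used
		// here only to keep the example short.
		ip, _ := b.autoAssign("k8s-pod-network." + pod)
		fmt.Printf("%s -> %s\n", pod, ip) // .2, .3, .4, matching the log
	}
}
```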
Jan 16 23:57:38.251981 containerd[1486]: time="2026-01-16T23:57:38.251900342Z" level=info msg="StartContainer for \"2aad3cb223793501b0a964c2a6c6f66d12ba31ac712d0dc1fdbb7fbbfb3d53bc\" returns successfully" Jan 16 23:57:38.308392 containerd[1486]: time="2026-01-16T23:57:38.308199014Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:38.312173 containerd[1486]: time="2026-01-16T23:57:38.312034242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:57:38.312173 containerd[1486]: time="2026-01-16T23:57:38.312130595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:38.312823 kubelet[2574]: E0116 23:57:38.312766 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:38.313166 kubelet[2574]: E0116 23:57:38.312844 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:38.315484 containerd[1486]: time="2026-01-16T23:57:38.314805591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:57:38.320758 kubelet[2574]: E0116 23:57:38.320671 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58xrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hkcjr_calico-system(494d2d41-870f-485e-a8b2-cbb0fecf4357): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:38.321887 kubelet[2574]: E0116 23:57:38.321824 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:57:38.382372 containerd[1486]: time="2026-01-16T23:57:38.382224537Z" level=info msg="StopPodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\"" Jan 16 23:57:38.384167 containerd[1486]: time="2026-01-16T23:57:38.382247415Z" level=info msg="StopPodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\"" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.456 [INFO][4455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.458 [INFO][4455] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" iface="eth0" netns="/var/run/netns/cni-aeb0c181-e02c-9058-aee3-4f05675c54ab" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.458 [INFO][4455] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" iface="eth0" netns="/var/run/netns/cni-aeb0c181-e02c-9058-aee3-4f05675c54ab" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.458 [INFO][4455] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" iface="eth0" netns="/var/run/netns/cni-aeb0c181-e02c-9058-aee3-4f05675c54ab" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.458 [INFO][4455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.458 [INFO][4455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.494 [INFO][4470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.494 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.494 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.506 [WARNING][4470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.506 [INFO][4470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.509 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:38.515871 containerd[1486]: 2026-01-16 23:57:38.512 [INFO][4455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:38.516531 containerd[1486]: time="2026-01-16T23:57:38.516173416Z" level=info msg="TearDown network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" successfully" Jan 16 23:57:38.516531 containerd[1486]: time="2026-01-16T23:57:38.516206254Z" level=info msg="StopPodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" returns successfully" Jan 16 23:57:38.518096 containerd[1486]: time="2026-01-16T23:57:38.517673342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-kxjbr,Uid:28527141-9485-40ed-9795-772c961207d3,Namespace:calico-apiserver,Attempt:1,}" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.451 [INFO][4452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.453 [INFO][4452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" iface="eth0" netns="/var/run/netns/cni-388e8a1e-fb18-37df-efbc-ccf695dfb204" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.453 [INFO][4452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" iface="eth0" netns="/var/run/netns/cni-388e8a1e-fb18-37df-efbc-ccf695dfb204" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.454 [INFO][4452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" iface="eth0" netns="/var/run/netns/cni-388e8a1e-fb18-37df-efbc-ccf695dfb204" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.454 [INFO][4452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.454 [INFO][4452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.496 [INFO][4468] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.496 [INFO][4468] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.509 [INFO][4468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.524 [WARNING][4468] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.524 [INFO][4468] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.527 [INFO][4468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:38.547483 containerd[1486]: 2026-01-16 23:57:38.530 [INFO][4452] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:38.548431 containerd[1486]: time="2026-01-16T23:57:38.547721054Z" level=info msg="TearDown network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" successfully" Jan 16 23:57:38.548431 containerd[1486]: time="2026-01-16T23:57:38.547762490Z" level=info msg="StopPodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" returns successfully" Jan 16 23:57:38.551626 containerd[1486]: time="2026-01-16T23:57:38.549595991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866b5b959f-q6rnd,Uid:ab55bbc8-2f84-4b63-ae7a-3f7a0c596089,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:38.555391 systemd[1]: run-netns-cni\x2daeb0c181\x2de02c\x2d9058\x2daee3\x2d4f05675c54ab.mount: Deactivated successfully. Jan 16 23:57:38.555675 systemd[1]: run-netns-cni\x2d995c4572\x2d4f9a\x2dd838\x2d9360\x2d6bf4798659dd.mount: Deactivated successfully. Jan 16 23:57:38.563307 systemd[1]: run-netns-cni\x2d388e8a1e\x2dfb18\x2d37df\x2defbc\x2dccf695dfb204.mount: Deactivated successfully. Jan 16 23:57:38.646368 kubelet[2574]: E0116 23:57:38.646181 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:57:38.662499 containerd[1486]: time="2026-01-16T23:57:38.662410279Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:38.665672 containerd[1486]: time="2026-01-16T23:57:38.663625027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:57:38.665672 containerd[1486]: time="2026-01-16T23:57:38.663727099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:57:38.669342 kubelet[2574]: E0116 23:57:38.667750 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:38.669342 kubelet[2574]: E0116 23:57:38.667825 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:38.670156 kubelet[2574]: E0116 23:57:38.669823 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:38.672620 containerd[1486]: time="2026-01-16T23:57:38.672355482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:57:38.713481 kubelet[2574]: I0116 23:57:38.712304 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m8gpg" podStartSLOduration=41.712266923 podStartE2EDuration="41.712266923s" podCreationTimestamp="2026-01-16 23:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:38.708565324 +0000 UTC m=+48.442628152" watchObservedRunningTime="2026-01-16 23:57:38.712266923 +0000 UTC m=+48.446329751" Jan 16 23:57:38.789413 systemd-networkd[1380]: calieb68b6c30c0: Link UP Jan 16 23:57:38.792904 systemd-networkd[1380]: calieb68b6c30c0: Gained carrier Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.575 [INFO][4482] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.609 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0 calico-apiserver-7c6f969f4- calico-apiserver 28527141-9485-40ed-9795-772c961207d3 983 0 2026-01-16 23:57:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:7c6f969f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 calico-apiserver-7c6f969f4-kxjbr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb68b6c30c0 [] [] }} ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.611 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.655 [INFO][4504] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" HandleID="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.656 [INFO][4504] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" HandleID="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"calico-apiserver-7c6f969f4-kxjbr", "timestamp":"2026-01-16 23:57:38.655875817 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.656 [INFO][4504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.656 [INFO][4504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
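In the IPAM entries above, every allocation is keyed by a `HandleID` of the form `k8s-pod-network.` followed by the sandbox container ID, and the earlier `Releasing address using handleID` entries look up the same key at teardown. A toy illustration of that pairing, inferred purely from the log rather than taken from Calico's source:

```go
package main

import "fmt"

// handleID reproduces the pattern visible in the IPAM entries above:
// allocations are keyed by "k8s-pod-network." plus the sandbox container ID,
// so a later release can find exactly the addresses this sandbox claimed.
func handleID(sandboxID string) string {
	return "k8s-pod-network." + sandboxID
}

func main() {
	fmt.Println(handleID("c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a"))
}
```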
Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.656 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2' Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.686 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.707 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.725 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.737 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.749 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.749 [INFO][4504] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.754 [INFO][4504] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.763 [INFO][4504] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.777 [INFO][4504] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.5/26] block=192.168.58.0/26 handle="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.777 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.5/26] handle="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.778 [INFO][4504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
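The allocator above claims 192.168.58.5 out of the block 192.168.58.0/26 whose affinity belongs to this node. A /26 holds 64 addresses, so a single node-affine block covers all of the pod IPs allocated in this log (.4, .5, and .6). A quick stdlib check of that containment and of the block size:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.58.0/26") // node-affine IPAM block
	for _, s := range []string{"192.168.58.4", "192.168.58.5", "192.168.58.6"} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
	fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64
}
```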
Jan 16 23:57:38.825747 containerd[1486]: 2026-01-16 23:57:38.778 [INFO][4504] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.5/26] IPv6=[] ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" HandleID="k8s-pod-network.c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.826326 containerd[1486]: 2026-01-16 23:57:38.783 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"28527141-9485-40ed-9795-772c961207d3", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"calico-apiserver-7c6f969f4-kxjbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb68b6c30c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:38.826326 containerd[1486]: 2026-01-16 23:57:38.784 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.5/32] ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.826326 containerd[1486]: 2026-01-16 23:57:38.784 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb68b6c30c0 ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.826326 containerd[1486]: 2026-01-16 23:57:38.790 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.826326 containerd[1486]: 2026-01-16 23:57:38.791 [INFO][4482] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"28527141-9485-40ed-9795-772c961207d3", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a", Pod:"calico-apiserver-7c6f969f4-kxjbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb68b6c30c0", MAC:"6a:98:9c:1f:ba:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:38.826326 containerd[1486]: 2026-01-16 23:57:38.821 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-kxjbr" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:38.862091 containerd[1486]: time="2026-01-16T23:57:38.861946244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:38.862091 containerd[1486]: time="2026-01-16T23:57:38.862035277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:38.862091 containerd[1486]: time="2026-01-16T23:57:38.862056195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:38.862301 containerd[1486]: time="2026-01-16T23:57:38.862158388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:38.895719 systemd[1]: Started cri-containerd-c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a.scope - libcontainer container c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a. 
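Each `PullImage ... not found` failure above is containerd resolving a tag against ghcr.io and receiving a 404 (the `trying next host - response was http.StatusNotFound` entries), which kubelet then surfaces as ErrImagePull. A minimal reproduction against the same reference, again assuming containerd's v1.x Go client:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// The same reference kubelet asked for; resolution fails with NotFound,
	// which kubelet reports as ErrImagePull and then ImagePullBackOff.
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4", containerd.WithPullUnpack)
	fmt.Println(err) // ...ghcr.io/flatcar/calico/goldmane:v3.30.4: not found
}
```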
Jan 16 23:57:38.922732 systemd-networkd[1380]: cali1139fda82c0: Link UP Jan 16 23:57:38.925740 systemd-networkd[1380]: cali1139fda82c0: Gained carrier Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.630 [INFO][4493] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.667 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0 calico-kube-controllers-866b5b959f- calico-system ab55bbc8-2f84-4b63-ae7a-3f7a0c596089 982 0 2026-01-16 23:57:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:866b5b959f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 calico-kube-controllers-866b5b959f-q6rnd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1139fda82c0 [] [] }} ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.668 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.756 [INFO][4514] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" HandleID="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.757 [INFO][4514] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" HandleID="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"calico-kube-controllers-866b5b959f-q6rnd", "timestamp":"2026-01-16 23:57:38.75628661 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.757 [INFO][4514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.778 [INFO][4514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
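The WorkloadEndpoint names above (e.g. `ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0`) are built from the node, pod, and interface names, with literal dashes doubled so the single-dash separators stay unambiguous. A sketch of that convention as observed in these entries (an inference from the log, not Calico's authoritative implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// wepName reconstructs the endpoint names seen above: dashes inside each
// component are doubled, then node, orchestrator ("k8s"), pod, and interface
// are joined with single dashes.
func wepName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return fmt.Sprintf("%s-k8s-%s-%s", esc(node), esc(pod), esc(iface))
}

func main() {
	fmt.Println(wepName("ci-4081-3-6-n-32c338e5e2", "calico-kube-controllers-866b5b959f-q6rnd", "eth0"))
	// ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0
}
```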
Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.779 [INFO][4514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2' Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.823 [INFO][4514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.836 [INFO][4514] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.851 [INFO][4514] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.856 [INFO][4514] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.875 [INFO][4514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.875 [INFO][4514] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.877 [INFO][4514] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44 Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.899 [INFO][4514] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.911 [INFO][4514] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.6/26] block=192.168.58.0/26 handle="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.911 [INFO][4514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.6/26] handle="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" host="ci-4081-3-6-n-32c338e5e2" Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.912 [INFO][4514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
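When a pull fails, kubelet does not retry immediately: the ImagePullBackOff entries in this log are the visible half of a capped exponential backoff. A sketch of the resulting schedule, assuming kubelet's commonly cited defaults of a 10s initial delay doubling up to a 5 minute cap (the exact values are kubelet configuration, not something this log states):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s initial backoff, doubling, capped at 5 minutes.
	d := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for i := 0; i < 8; i++ {
		fmt.Printf("retry %d after %v\n", i, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}
```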
Jan 16 23:57:38.957715 containerd[1486]: 2026-01-16 23:57:38.912 [INFO][4514] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.6/26] IPv6=[] ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" HandleID="k8s-pod-network.0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.958290 containerd[1486]: 2026-01-16 23:57:38.917 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0", GenerateName:"calico-kube-controllers-866b5b959f-", Namespace:"calico-system", SelfLink:"", UID:"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866b5b959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"calico-kube-controllers-866b5b959f-q6rnd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1139fda82c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:38.958290 containerd[1486]: 2026-01-16 23:57:38.917 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.6/32] ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.958290 containerd[1486]: 2026-01-16 23:57:38.917 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1139fda82c0 ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.958290 containerd[1486]: 2026-01-16 23:57:38.920 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" 
WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.958290 containerd[1486]: 2026-01-16 23:57:38.926 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0", GenerateName:"calico-kube-controllers-866b5b959f-", Namespace:"calico-system", SelfLink:"", UID:"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866b5b959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44", Pod:"calico-kube-controllers-866b5b959f-q6rnd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1139fda82c0", MAC:"56:4b:57:8c:02:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:38.958290 containerd[1486]: 2026-01-16 23:57:38.953 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44" Namespace="calico-system" Pod="calico-kube-controllers-866b5b959f-q6rnd" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:38.997161 containerd[1486]: time="2026-01-16T23:57:38.996673344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:38.997161 containerd[1486]: time="2026-01-16T23:57:38.996753737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:38.997161 containerd[1486]: time="2026-01-16T23:57:38.996764697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:38.997161 containerd[1486]: time="2026-01-16T23:57:38.997045755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:39.031696 systemd[1]: Started cri-containerd-0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44.scope - libcontainer container 0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44. Jan 16 23:57:39.038015 containerd[1486]: time="2026-01-16T23:57:39.037837161Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:39.038649 containerd[1486]: time="2026-01-16T23:57:39.038277169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-kxjbr,Uid:28527141-9485-40ed-9795-772c961207d3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a\"" Jan 16 23:57:39.039716 containerd[1486]: time="2026-01-16T23:57:39.039407527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:57:39.039908 containerd[1486]: time="2026-01-16T23:57:39.039871293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:57:39.040253 kubelet[2574]: E0116 23:57:39.040007 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:39.040253 kubelet[2574]: E0116 23:57:39.040064 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:39.040253 kubelet[2574]: E0116 23:57:39.040189 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:39.041899 kubelet[2574]: E0116 23:57:39.041843 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:57:39.046863 containerd[1486]: time="2026-01-16T23:57:39.046158633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:39.094378 containerd[1486]: time="2026-01-16T23:57:39.094254719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866b5b959f-q6rnd,Uid:ab55bbc8-2f84-4b63-ae7a-3f7a0c596089,Namespace:calico-system,Attempt:1,} returns sandbox id \"0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44\"" Jan 16 23:57:39.367669 systemd-networkd[1380]: cali65de395b1aa: 
Gained IPv6LL
Jan 16 23:57:39.380577 containerd[1486]: time="2026-01-16T23:57:39.380176669Z" level=info msg="StopPodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\""
Jan 16 23:57:39.392016 containerd[1486]: time="2026-01-16T23:57:39.391974407Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:39.394196 containerd[1486]: time="2026-01-16T23:57:39.393847390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 23:57:39.394196 containerd[1486]: time="2026-01-16T23:57:39.393977901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 16 23:57:39.394684 kubelet[2574]: E0116 23:57:39.394515 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:57:39.394684 kubelet[2574]: E0116 23:57:39.394665 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:57:39.395854 kubelet[2574]: E0116 23:57:39.394911 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpxvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-kxjbr_calico-apiserver(28527141-9485-40ed-9795-772c961207d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:39.395942 containerd[1486]: time="2026-01-16T23:57:39.395277046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 16 23:57:39.396302 kubelet[2574]: E0116 23:57:39.396215 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 16 23:57:39.431772 systemd-networkd[1380]: calib94ac785ab1: Gained IPv6LL
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.470 [INFO][4648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.471 [INFO][4648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" iface="eth0" netns="/var/run/netns/cni-f3c63ee8-25d9-b44b-3552-cb3b2e7b8f91"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.472 [INFO][4648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" iface="eth0" netns="/var/run/netns/cni-f3c63ee8-25d9-b44b-3552-cb3b2e7b8f91"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.472 [INFO][4648] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" iface="eth0" netns="/var/run/netns/cni-f3c63ee8-25d9-b44b-3552-cb3b2e7b8f91"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.472 [INFO][4648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.472 [INFO][4648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.508 [INFO][4656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.508 [INFO][4656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.508 [INFO][4656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.520 [WARNING][4656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.520 [INFO][4656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.522 [INFO][4656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:57:39.528719 containerd[1486]: 2026-01-16 23:57:39.525 [INFO][4648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb"
Jan 16 23:57:39.529880 containerd[1486]: time="2026-01-16T23:57:39.529686105Z" level=info msg="TearDown network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" successfully"
Jan 16 23:57:39.529880 containerd[1486]: time="2026-01-16T23:57:39.529751301Z" level=info msg="StopPodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" returns successfully"
Jan 16 23:57:39.531563 containerd[1486]: time="2026-01-16T23:57:39.530697391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-6hcrh,Uid:2216c59e-7647-44aa-810f-40503d382780,Namespace:calico-apiserver,Attempt:1,}"
Jan 16 23:57:39.552263 systemd[1]: run-netns-cni\x2df3c63ee8\x2d25d9\x2db44b\x2d3552\x2dcb3b2e7b8f91.mount: Deactivated successfully.
Jan 16 23:57:39.666577 kubelet[2574]: E0116 23:57:39.665949 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 16 23:57:39.672952 kubelet[2574]: E0116 23:57:39.672876 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357"
Jan 16 23:57:39.675751 kubelet[2574]: E0116 23:57:39.675669 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 16 23:57:39.687677 systemd-networkd[1380]: cali3c12ca45b8f: Gained IPv6LL
Jan 16 23:57:39.738572 systemd-networkd[1380]: cali2fa51af46eb: Link UP
Jan 16 23:57:39.740149 systemd-networkd[1380]: cali2fa51af46eb: Gained carrier
Jan 16 23:57:39.756356 containerd[1486]: time="2026-01-16T23:57:39.756284069Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:39.757865 containerd[1486]: time="2026-01-16T23:57:39.757814797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 16 23:57:39.759234 containerd[1486]: time="2026-01-16T23:57:39.757851995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:57:39.759285 kubelet[2574]: E0116 23:57:39.758040 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 23:57:39.759285 kubelet[2574]: E0116 23:57:39.758090 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 23:57:39.759285 kubelet[2574]: E0116 23:57:39.758219 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rlx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866b5b959f-q6rnd_calico-system(ab55bbc8-2f84-4b63-ae7a-3f7a0c596089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:39.759650 kubelet[2574]: E0116 23:57:39.759594 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.590 [INFO][4663] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.607 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0 calico-apiserver-7c6f969f4- calico-apiserver 2216c59e-7647-44aa-810f-40503d382780 1015 0 2026-01-16 23:57:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c6f969f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 calico-apiserver-7c6f969f4-6hcrh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2fa51af46eb [] [] }} ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.607 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.647 [INFO][4674] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" HandleID="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.647 [INFO][4674] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" HandleID="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"calico-apiserver-7c6f969f4-6hcrh", "timestamp":"2026-01-16 23:57:39.647294072 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.647 [INFO][4674] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.647 [INFO][4674] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.647 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2'
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.659 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.667 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.681 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.686 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.695 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.695 [INFO][4674] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.699 [INFO][4674] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.705 [INFO][4674] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.726 [INFO][4674] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.7/26] block=192.168.58.0/26 handle="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.726 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.7/26] handle="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.726 [INFO][4674] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:57:39.764284 containerd[1486]: 2026-01-16 23:57:39.726 [INFO][4674] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.7/26] IPv6=[] ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" HandleID="k8s-pod-network.8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.766413 containerd[1486]: 2026-01-16 23:57:39.730 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2216c59e-7647-44aa-810f-40503d382780", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"calico-apiserver-7c6f969f4-6hcrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2fa51af46eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:39.766413 containerd[1486]: 2026-01-16 23:57:39.731 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.7/32] ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.766413 containerd[1486]: 2026-01-16 23:57:39.731 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fa51af46eb ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.766413 containerd[1486]: 2026-01-16 23:57:39.741 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.766413 containerd[1486]: 2026-01-16 23:57:39.741 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2216c59e-7647-44aa-810f-40503d382780", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3", Pod:"calico-apiserver-7c6f969f4-6hcrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2fa51af46eb", MAC:"86:7b:6b:20:ae:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:39.766413 containerd[1486]: 2026-01-16 23:57:39.760 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3" Namespace="calico-apiserver" Pod="calico-apiserver-7c6f969f4-6hcrh" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0"
Jan 16 23:57:39.790761 containerd[1486]: time="2026-01-16T23:57:39.790594842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:57:39.790761 containerd[1486]: time="2026-01-16T23:57:39.790686156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:57:39.790761 containerd[1486]: time="2026-01-16T23:57:39.790726633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:39.791202 containerd[1486]: time="2026-01-16T23:57:39.790835065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:39.817669 systemd[1]: Started cri-containerd-8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3.scope - libcontainer container 8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3.
Jan 16 23:57:39.862976 containerd[1486]: time="2026-01-16T23:57:39.862937477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6f969f4-6hcrh,Uid:2216c59e-7647-44aa-810f-40503d382780,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3\""
Jan 16 23:57:39.865096 containerd[1486]: time="2026-01-16T23:57:39.865042963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 23:57:40.209403 containerd[1486]: time="2026-01-16T23:57:40.209313072Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:40.210790 containerd[1486]: time="2026-01-16T23:57:40.210742732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 23:57:40.210926 containerd[1486]: time="2026-01-16T23:57:40.210863204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 16 23:57:40.211208 kubelet[2574]: E0116 23:57:40.211123 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:57:40.211208 kubelet[2574]: E0116 23:57:40.211183 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:57:40.211399 kubelet[2574]: E0116 23:57:40.211344 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvxng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-6hcrh_calico-apiserver(2216c59e-7647-44aa-810f-40503d382780): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:40.212971 kubelet[2574]: E0116 23:57:40.212813 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780"
Jan 16 23:57:40.519678 systemd-networkd[1380]: cali1139fda82c0: Gained IPv6LL
Jan 16 23:57:40.646765 kubelet[2574]: I0116 23:57:40.646385 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 23:57:40.680790 kubelet[2574]: E0116 23:57:40.680549 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780"
Jan 16 23:57:40.683780 kubelet[2574]: E0116 23:57:40.683646 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 16 23:57:40.683780 kubelet[2574]: E0116 23:57:40.683747 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089"
Jan 16 23:57:40.776002 systemd-networkd[1380]: calieb68b6c30c0: Gained IPv6LL
Jan 16 23:57:41.095766 systemd-networkd[1380]: cali2fa51af46eb: Gained IPv6LL
Jan 16 23:57:41.380033 containerd[1486]: time="2026-01-16T23:57:41.379406383Z" level=info msg="StopPodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\""
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.447 [INFO][4763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.448 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" iface="eth0" netns="/var/run/netns/cni-a5c2a767-2018-55ba-f2ce-6cbe95ee8ac8"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.449 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" iface="eth0" netns="/var/run/netns/cni-a5c2a767-2018-55ba-f2ce-6cbe95ee8ac8"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.450 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" iface="eth0" netns="/var/run/netns/cni-a5c2a767-2018-55ba-f2ce-6cbe95ee8ac8"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.451 [INFO][4763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.451 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.475 [INFO][4776] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.475 [INFO][4776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.475 [INFO][4776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.485 [WARNING][4776] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.485 [INFO][4776] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.487 [INFO][4776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:57:41.493788 containerd[1486]: 2026-01-16 23:57:41.490 [INFO][4763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77"
Jan 16 23:57:41.493788 containerd[1486]: time="2026-01-16T23:57:41.493551236Z" level=info msg="TearDown network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" successfully"
Jan 16 23:57:41.493788 containerd[1486]: time="2026-01-16T23:57:41.493601833Z" level=info msg="StopPodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" returns successfully"
Jan 16 23:57:41.497538 containerd[1486]: time="2026-01-16T23:57:41.496255574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cvb6h,Uid:7c95011c-e199-44d0-b1e0-17f58fab750a,Namespace:kube-system,Attempt:1,}"
Jan 16 23:57:41.498497 systemd[1]: run-netns-cni\x2da5c2a767\x2d2018\x2d55ba\x2df2ce\x2d6cbe95ee8ac8.mount: Deactivated successfully.
Jan 16 23:57:41.685317 systemd-networkd[1380]: cali9977d17229d: Link UP
Jan 16 23:57:41.689015 kubelet[2574]: E0116 23:57:41.688709 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780"
Jan 16 23:57:41.691043 systemd-networkd[1380]: cali9977d17229d: Gained carrier
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.555 [INFO][4784] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.573 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0 coredns-674b8bbfcf- kube-system 7c95011c-e199-44d0-b1e0-17f58fab750a 1059 0 2026-01-16 23:56:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-32c338e5e2 coredns-674b8bbfcf-cvb6h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9977d17229d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.573 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.609 [INFO][4805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" HandleID="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.609 [INFO][4805] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" HandleID="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-32c338e5e2", "pod":"coredns-674b8bbfcf-cvb6h", "timestamp":"2026-01-16 23:57:41.609427533 +0000 UTC"}, Hostname:"ci-4081-3-6-n-32c338e5e2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.609 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.609 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.609 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-32c338e5e2'
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.620 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.626 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.631 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.634 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.636 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.637 [INFO][4805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.640 [INFO][4805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.658 [INFO][4805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.667 [INFO][4805] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.8/26] block=192.168.58.0/26 handle="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.667 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.8/26] handle="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" host="ci-4081-3-6-n-32c338e5e2"
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.667 [INFO][4805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:57:41.717021 containerd[1486]: 2026-01-16 23:57:41.667 [INFO][4805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.8/26] IPv6=[] ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" HandleID="k8s-pod-network.87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.718784 containerd[1486]: 2026-01-16 23:57:41.671 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c95011c-e199-44d0-b1e0-17f58fab750a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"", Pod:"coredns-674b8bbfcf-cvb6h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9977d17229d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:41.718784 containerd[1486]: 2026-01-16 23:57:41.672 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.8/32] ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.718784 containerd[1486]: 2026-01-16 23:57:41.672 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9977d17229d ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.718784 containerd[1486]: 2026-01-16 23:57:41.688 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.718784 containerd[1486]: 2026-01-16 23:57:41.691 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c95011c-e199-44d0-b1e0-17f58fab750a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df", Pod:"coredns-674b8bbfcf-cvb6h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9977d17229d", MAC:"da:b7:10:13:ba:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:41.718784 containerd[1486]: 2026-01-16 23:57:41.713 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df" Namespace="kube-system" Pod="coredns-674b8bbfcf-cvb6h" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0"
Jan 16 23:57:41.756694 containerd[1486]: time="2026-01-16T23:57:41.755858698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:57:41.756694 containerd[1486]: time="2026-01-16T23:57:41.756152878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:57:41.756694 containerd[1486]: time="2026-01-16T23:57:41.756228833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:41.756694 containerd[1486]: time="2026-01-16T23:57:41.756537972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:41.804659 systemd[1]: Started cri-containerd-87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df.scope - libcontainer container 87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df.
Jan 16 23:57:41.847187 containerd[1486]: time="2026-01-16T23:57:41.847136967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cvb6h,Uid:7c95011c-e199-44d0-b1e0-17f58fab750a,Namespace:kube-system,Attempt:1,} returns sandbox id \"87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df\""
Jan 16 23:57:41.861175 containerd[1486]: time="2026-01-16T23:57:41.861038753Z" level=info msg="CreateContainer within sandbox \"87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 16 23:57:41.879917 containerd[1486]: time="2026-01-16T23:57:41.879844770Z" level=info msg="CreateContainer within sandbox \"87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"135b76146db5a66c1db35ee250cbb145ee9fcf73fb29755a8148c9f9a9339088\""
Jan 16 23:57:41.881652 containerd[1486]: time="2026-01-16T23:57:41.881598013Z" level=info msg="StartContainer for \"135b76146db5a66c1db35ee250cbb145ee9fcf73fb29755a8148c9f9a9339088\""
Jan 16 23:57:41.914686 systemd[1]: Started cri-containerd-135b76146db5a66c1db35ee250cbb145ee9fcf73fb29755a8148c9f9a9339088.scope - libcontainer container 135b76146db5a66c1db35ee250cbb145ee9fcf73fb29755a8148c9f9a9339088.
Jan 16 23:57:41.966367 containerd[1486]: time="2026-01-16T23:57:41.965754920Z" level=info msg="StartContainer for \"135b76146db5a66c1db35ee250cbb145ee9fcf73fb29755a8148c9f9a9339088\" returns successfully"
Jan 16 23:57:42.045584 kernel: bpftool[4937]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 16 23:57:42.246309 systemd-networkd[1380]: vxlan.calico: Link UP
Jan 16 23:57:42.246317 systemd-networkd[1380]: vxlan.calico: Gained carrier
Jan 16 23:57:42.499447 systemd[1]: run-containerd-runc-k8s.io-87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df-runc.0lLg8A.mount: Deactivated successfully.
Jan 16 23:57:42.703935 kubelet[2574]: I0116 23:57:42.703809 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cvb6h" podStartSLOduration=45.703780444 podStartE2EDuration="45.703780444s" podCreationTimestamp="2026-01-16 23:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:42.700750959 +0000 UTC m=+52.434813787" watchObservedRunningTime="2026-01-16 23:57:42.703780444 +0000 UTC m=+52.437843272"
Jan 16 23:57:43.208190 systemd-networkd[1380]: cali9977d17229d: Gained IPv6LL
Jan 16 23:57:43.382883 containerd[1486]: time="2026-01-16T23:57:43.382123308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 16 23:57:43.715250 containerd[1486]: time="2026-01-16T23:57:43.715123105Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:43.717696 containerd[1486]: time="2026-01-16T23:57:43.717544435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 16 23:57:43.717696 containerd[1486]: time="2026-01-16T23:57:43.717626950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 16 23:57:43.718733 kubelet[2574]: E0116 23:57:43.717879 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:57:43.718733 kubelet[2574]: E0116 23:57:43.717934 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:57:43.718733 kubelet[2574]: E0116 23:57:43.718077 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f02b4406bf4653bd0ff6a5488c4241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:43.720845 containerd[1486]: time="2026-01-16T23:57:43.720483254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 16 23:57:44.061289 containerd[1486]: time="2026-01-16T23:57:44.061040344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:57:44.062679 containerd[1486]: time="2026-01-16T23:57:44.062525857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 16 23:57:44.062896 containerd[1486]: time="2026-01-16T23:57:44.062639730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:57:44.063028 kubelet[2574]: E0116 23:57:44.062929 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:57:44.063028 kubelet[2574]: E0116 23:57:44.063000 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:57:44.063342 kubelet[2574]: E0116 23:57:44.063261 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:57:44.065379 kubelet[2574]: E0116 23:57:44.065306 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b"
Jan 16 23:57:44.153579 kubelet[2574]: I0116 23:57:44.153511 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 23:57:44.168398 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL
Jan 16 23:57:44.285420 systemd[1]: run-containerd-runc-k8s.io-e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3-runc.X5brYw.mount: Deactivated successfully.
Jan 16 23:57:50.388704 containerd[1486]: time="2026-01-16T23:57:50.388381381Z" level=info msg="StopPodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\""
Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.467 [WARNING][5082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2e8f4602-9cf1-4251-be4e-4def80a11ec7", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e", Pod:"coredns-674b8bbfcf-m8gpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c12ca45b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.468 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2"
Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.468 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" iface="eth0" netns="" Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.468 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.468 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.505 [INFO][5089] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.505 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.505 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.517 [WARNING][5089] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.517 [INFO][5089] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.523 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:50.531954 containerd[1486]: 2026-01-16 23:57:50.527 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.531954 containerd[1486]: time="2026-01-16T23:57:50.531562125Z" level=info msg="TearDown network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" successfully" Jan 16 23:57:50.531954 containerd[1486]: time="2026-01-16T23:57:50.531624603Z" level=info msg="StopPodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" returns successfully" Jan 16 23:57:50.534426 containerd[1486]: time="2026-01-16T23:57:50.534388039Z" level=info msg="RemovePodSandbox for \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\"" Jan 16 23:57:50.553269 containerd[1486]: time="2026-01-16T23:57:50.553197476Z" level=info msg="Forcibly stopping sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\"" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.597 [WARNING][5103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2e8f4602-9cf1-4251-be4e-4def80a11ec7", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"0bb9f595d7851caf6ffc88d85d67b5e3f617a40a44765cd25e0e737635a91a2e", Pod:"coredns-674b8bbfcf-m8gpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c12ca45b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.597 [INFO][5103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.597 [INFO][5103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" iface="eth0" netns="" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.597 [INFO][5103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.598 [INFO][5103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.647 [INFO][5110] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.647 [INFO][5110] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.648 [INFO][5110] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.680 [WARNING][5110] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.680 [INFO][5110] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" HandleID="k8s-pod-network.ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--m8gpg-eth0" Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.685 [INFO][5110] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:50.689598 containerd[1486]: 2026-01-16 23:57:50.687 [INFO][5103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2" Jan 16 23:57:50.690844 containerd[1486]: time="2026-01-16T23:57:50.690157219Z" level=info msg="TearDown network for sandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" successfully" Jan 16 23:57:50.718494 containerd[1486]: time="2026-01-16T23:57:50.717411157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:50.718494 containerd[1486]: time="2026-01-16T23:57:50.717547031Z" level=info msg="RemovePodSandbox \"ef4e9e8ff9534c99c1b81587db4ca0d4478f6471de58d2827288e0d5a4879ac2\" returns successfully" Jan 16 23:57:50.720590 containerd[1486]: time="2026-01-16T23:57:50.719169439Z" level=info msg="StopPodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\"" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.772 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2216c59e-7647-44aa-810f-40503d382780", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3", Pod:"calico-apiserver-7c6f969f4-6hcrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2fa51af46eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.772 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.772 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" iface="eth0" netns="" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.773 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.773 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.806 [INFO][5134] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.806 [INFO][5134] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.806 [INFO][5134] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.817 [WARNING][5134] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.817 [INFO][5134] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.819 [INFO][5134] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:50.825532 containerd[1486]: 2026-01-16 23:57:50.822 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.827547 containerd[1486]: time="2026-01-16T23:57:50.825646347Z" level=info msg="TearDown network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" successfully" Jan 16 23:57:50.827547 containerd[1486]: time="2026-01-16T23:57:50.825690625Z" level=info msg="StopPodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" returns successfully" Jan 16 23:57:50.827547 containerd[1486]: time="2026-01-16T23:57:50.826808655Z" level=info msg="RemovePodSandbox for \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\"" Jan 16 23:57:50.827547 containerd[1486]: time="2026-01-16T23:57:50.826847694Z" level=info msg="Forcibly stopping sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\"" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.887 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2216c59e-7647-44aa-810f-40503d382780", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"8e51be4ef309e71eed25088c5373965ee2ce9bc7cb0b359ae99d7ba39e87d3d3", Pod:"calico-apiserver-7c6f969f4-6hcrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2fa51af46eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.887 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.888 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" iface="eth0" netns="" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.888 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.888 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.915 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.915 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.915 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.929 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.929 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" HandleID="k8s-pod-network.32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--6hcrh-eth0" Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.932 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:50.942561 containerd[1486]: 2026-01-16 23:57:50.938 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb" Jan 16 23:57:50.942561 containerd[1486]: time="2026-01-16T23:57:50.941704387Z" level=info msg="TearDown network for sandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" successfully" Jan 16 23:57:50.947378 containerd[1486]: time="2026-01-16T23:57:50.946445814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:50.947662 containerd[1486]: time="2026-01-16T23:57:50.947406731Z" level=info msg="RemovePodSandbox \"32c6585aca5f8c5c5cdc4eaae41999928b2b18636e66ffc11d735015927059eb\" returns successfully" Jan 16 23:57:50.947958 containerd[1486]: time="2026-01-16T23:57:50.947930028Z" level=info msg="StopPodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\"" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.032 [WARNING][5169] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"494d2d41-870f-485e-a8b2-cbb0fecf4357", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8", Pod:"goldmane-666569f655-hkcjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib94ac785ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.032 [INFO][5169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.032 [INFO][5169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" iface="eth0" netns="" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.032 [INFO][5169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.032 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.074 [INFO][5176] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.074 [INFO][5176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.074 [INFO][5176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.086 [WARNING][5176] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.086 [INFO][5176] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.088 [INFO][5176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.095569 containerd[1486]: 2026-01-16 23:57:51.091 [INFO][5169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.097420 containerd[1486]: time="2026-01-16T23:57:51.095622449Z" level=info msg="TearDown network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" successfully" Jan 16 23:57:51.097420 containerd[1486]: time="2026-01-16T23:57:51.095648927Z" level=info msg="StopPodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" returns successfully" Jan 16 23:57:51.097420 containerd[1486]: time="2026-01-16T23:57:51.096682243Z" level=info msg="RemovePodSandbox for \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\"" Jan 16 23:57:51.097420 containerd[1486]: time="2026-01-16T23:57:51.096715522Z" level=info msg="Forcibly stopping sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\"" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.185 [WARNING][5190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"494d2d41-870f-485e-a8b2-cbb0fecf4357", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"ca91b9f1ef84aad4606e9fb44e711971d4a1ab9064542fd2ce40ec5805c719e8", Pod:"goldmane-666569f655-hkcjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib94ac785ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.185 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.185 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" iface="eth0" netns="" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.185 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.185 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.217 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.217 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.217 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.231 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.231 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" HandleID="k8s-pod-network.260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Workload="ci--4081--3--6--n--32c338e5e2-k8s-goldmane--666569f655--hkcjr-eth0" Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.233 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.241225 containerd[1486]: 2026-01-16 23:57:51.235 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7" Jan 16 23:57:51.241225 containerd[1486]: time="2026-01-16T23:57:51.239629100Z" level=info msg="TearDown network for sandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" successfully" Jan 16 23:57:51.246226 containerd[1486]: time="2026-01-16T23:57:51.246166501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:51.246525 containerd[1486]: time="2026-01-16T23:57:51.246497726Z" level=info msg="RemovePodSandbox \"260549f60cb62393268f6c3fe68998d9cb1cd041a9deff8faaa3dfa07e011ec7\" returns successfully" Jan 16 23:57:51.247447 containerd[1486]: time="2026-01-16T23:57:51.247412887Z" level=info msg="StopPodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\"" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.303 [WARNING][5213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0", GenerateName:"calico-kube-controllers-866b5b959f-", Namespace:"calico-system", SelfLink:"", UID:"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866b5b959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44", Pod:"calico-kube-controllers-866b5b959f-q6rnd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1139fda82c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.304 [INFO][5213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.304 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" iface="eth0" netns="" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.304 [INFO][5213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.304 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.330 [INFO][5220] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.331 [INFO][5220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.331 [INFO][5220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.341 [WARNING][5220] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.341 [INFO][5220] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.343 [INFO][5220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.348360 containerd[1486]: 2026-01-16 23:57:51.346 [INFO][5213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.348866 containerd[1486]: time="2026-01-16T23:57:51.348388056Z" level=info msg="TearDown network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" successfully" Jan 16 23:57:51.348866 containerd[1486]: time="2026-01-16T23:57:51.348447813Z" level=info msg="StopPodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" returns successfully" Jan 16 23:57:51.349988 containerd[1486]: time="2026-01-16T23:57:51.349890712Z" level=info msg="RemovePodSandbox for \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\"" Jan 16 23:57:51.349988 containerd[1486]: time="2026-01-16T23:57:51.349930150Z" level=info msg="Forcibly stopping sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\"" Jan 16 23:57:51.384901 containerd[1486]: time="2026-01-16T23:57:51.384596150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.414 [WARNING][5234] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0", GenerateName:"calico-kube-controllers-866b5b959f-", Namespace:"calico-system", SelfLink:"", UID:"ab55bbc8-2f84-4b63-ae7a-3f7a0c596089", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866b5b959f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"0fe63fc848c184a9b35a8689e38ea19d9605a02473900ab86ded57fe287b7e44", Pod:"calico-kube-controllers-866b5b959f-q6rnd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1139fda82c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.414 [INFO][5234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.414 [INFO][5234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" iface="eth0" netns="" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.414 [INFO][5234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.414 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.457 [INFO][5241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.457 [INFO][5241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.457 [INFO][5241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.470 [WARNING][5241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.470 [INFO][5241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" HandleID="k8s-pod-network.a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--kube--controllers--866b5b959f--q6rnd-eth0" Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.472 [INFO][5241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.478162 containerd[1486]: 2026-01-16 23:57:51.475 [INFO][5234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576" Jan 16 23:57:51.479266 containerd[1486]: time="2026-01-16T23:57:51.478248791Z" level=info msg="TearDown network for sandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" successfully" Jan 16 23:57:51.487423 containerd[1486]: time="2026-01-16T23:57:51.486503959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:51.487423 containerd[1486]: time="2026-01-16T23:57:51.486785467Z" level=info msg="RemovePodSandbox \"a12f5242447c8b03d19df8cbcc7b691518162461033008b0db4011134517a576\" returns successfully" Jan 16 23:57:51.488992 containerd[1486]: time="2026-01-16T23:57:51.488678706Z" level=info msg="StopPodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\"" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.546 [WARNING][5255] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebb15273-01f0-4342-86a4-e67c5f3e53d0", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb", Pod:"csi-node-driver-b76rd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65de395b1aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.547 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.548 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" iface="eth0" netns="" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.548 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.548 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.582 [INFO][5262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.582 [INFO][5262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.583 [INFO][5262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.593 [WARNING][5262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.593 [INFO][5262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.595 [INFO][5262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.601879 containerd[1486]: 2026-01-16 23:57:51.598 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.602921 containerd[1486]: time="2026-01-16T23:57:51.601958709Z" level=info msg="TearDown network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" successfully" Jan 16 23:57:51.602921 containerd[1486]: time="2026-01-16T23:57:51.602034026Z" level=info msg="StopPodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" returns successfully" Jan 16 23:57:51.602921 containerd[1486]: time="2026-01-16T23:57:51.602864390Z" level=info msg="RemovePodSandbox for \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\"" Jan 16 23:57:51.602921 containerd[1486]: time="2026-01-16T23:57:51.602912948Z" level=info msg="Forcibly stopping sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\"" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.660 [WARNING][5276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebb15273-01f0-4342-86a4-e67c5f3e53d0", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"8fe39fa46649fadda007c8f21329c30c8a4261bbff928d23763e59bd1e1737bb", Pod:"csi-node-driver-b76rd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65de395b1aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.660 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.660 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" iface="eth0" netns="" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.660 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.660 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.689 [INFO][5284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.690 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.690 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.703 [WARNING][5284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.703 [INFO][5284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" HandleID="k8s-pod-network.1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Workload="ci--4081--3--6--n--32c338e5e2-k8s-csi--node--driver--b76rd-eth0" Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.709 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.728197 containerd[1486]: 2026-01-16 23:57:51.715 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff" Jan 16 23:57:51.728197 containerd[1486]: time="2026-01-16T23:57:51.724978496Z" level=info msg="TearDown network for sandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" successfully" Jan 16 23:57:51.731826 containerd[1486]: time="2026-01-16T23:57:51.731776086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:51.732032 containerd[1486]: time="2026-01-16T23:57:51.731993277Z" level=info msg="RemovePodSandbox \"1c0ef8d82b9c40862671915bc77825b974c300e58ee4d72de75ab0b7587f1cff\" returns successfully" Jan 16 23:57:51.732603 containerd[1486]: time="2026-01-16T23:57:51.732582291Z" level=info msg="StopPodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\"" Jan 16 23:57:51.739180 containerd[1486]: time="2026-01-16T23:57:51.739130132Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:51.740682 containerd[1486]: time="2026-01-16T23:57:51.740560711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:57:51.740682 containerd[1486]: time="2026-01-16T23:57:51.740636588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:57:51.740941 kubelet[2574]: E0116 23:57:51.740855 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:51.740941 kubelet[2574]: E0116 23:57:51.740905 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:51.741350 kubelet[2574]: E0116 23:57:51.741272 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:51.744223 containerd[1486]: time="2026-01-16T23:57:51.744191116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.793 [WARNING][5298] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.793 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.793 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" iface="eth0" netns="" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.793 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.793 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.824 [INFO][5305] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.825 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.827 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.841 [WARNING][5305] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.841 [INFO][5305] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.843 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.849013 containerd[1486]: 2026-01-16 23:57:51.845 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.849013 containerd[1486]: time="2026-01-16T23:57:51.848818248Z" level=info msg="TearDown network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" successfully" Jan 16 23:57:51.849013 containerd[1486]: time="2026-01-16T23:57:51.848862687Z" level=info msg="StopPodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" returns successfully" Jan 16 23:57:51.850547 containerd[1486]: time="2026-01-16T23:57:51.849431582Z" level=info msg="RemovePodSandbox for \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\"" Jan 16 23:57:51.850547 containerd[1486]: time="2026-01-16T23:57:51.849501059Z" level=info msg="Forcibly stopping sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\"" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.907 [WARNING][5319] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" WorkloadEndpoint="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.908 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.908 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" iface="eth0" netns="" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.908 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.908 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.946 [INFO][5326] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.947 [INFO][5326] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.947 [INFO][5326] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.959 [WARNING][5326] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.959 [INFO][5326] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" HandleID="k8s-pod-network.6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Workload="ci--4081--3--6--n--32c338e5e2-k8s-whisker--848475bf4c--wx5qm-eth0" Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.961 [INFO][5326] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:51.967665 containerd[1486]: 2026-01-16 23:57:51.964 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b" Jan 16 23:57:51.967665 containerd[1486]: time="2026-01-16T23:57:51.967348827Z" level=info msg="TearDown network for sandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" successfully" Jan 16 23:57:51.972176 containerd[1486]: time="2026-01-16T23:57:51.972121344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:51.972728 containerd[1486]: time="2026-01-16T23:57:51.972339174Z" level=info msg="RemovePodSandbox \"6e16001728dd14416be70318e3a602cadbd5c077cee8928c4ea87748e0eac60b\" returns successfully" Jan 16 23:57:51.973284 containerd[1486]: time="2026-01-16T23:57:51.972972747Z" level=info msg="StopPodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\"" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.046 [WARNING][5340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"28527141-9485-40ed-9795-772c961207d3", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a", Pod:"calico-apiserver-7c6f969f4-kxjbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb68b6c30c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.047 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.047 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" iface="eth0" netns="" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.047 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.047 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.076 [INFO][5347] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.078 [INFO][5347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.078 [INFO][5347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.098 [WARNING][5347] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.098 [INFO][5347] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.100 [INFO][5347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:52.106822 containerd[1486]: 2026-01-16 23:57:52.103 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.109050 containerd[1486]: time="2026-01-16T23:57:52.108601975Z" level=info msg="TearDown network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" successfully" Jan 16 23:57:52.109050 containerd[1486]: time="2026-01-16T23:57:52.108639774Z" level=info msg="StopPodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" returns successfully" Jan 16 23:57:52.109314 containerd[1486]: time="2026-01-16T23:57:52.109196391Z" level=info msg="RemovePodSandbox for \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\"" Jan 16 23:57:52.109314 containerd[1486]: time="2026-01-16T23:57:52.109228870Z" level=info msg="Forcibly stopping sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\"" Jan 16 23:57:52.114493 containerd[1486]: time="2026-01-16T23:57:52.112978397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:52.114630 containerd[1486]: time="2026-01-16T23:57:52.114535614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:57:52.114656 containerd[1486]: time="2026-01-16T23:57:52.114645210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:57:52.115535 kubelet[2574]: E0116 23:57:52.114883 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:52.115535 kubelet[2574]: E0116 23:57:52.114935 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:52.115535 
kubelet[2574]: E0116 23:57:52.115049 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:52.118513 kubelet[2574]: E0116 23:57:52.116410 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.191 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0", GenerateName:"calico-apiserver-7c6f969f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"28527141-9485-40ed-9795-772c961207d3", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6f969f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"c5e96dfad7b0dd9480d9be28608a0caffe43059dc7056d8bd94edaf36500659a", Pod:"calico-apiserver-7c6f969f4-kxjbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb68b6c30c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.192 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.194 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" iface="eth0" netns="" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.194 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.194 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.221 [INFO][5368] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.221 [INFO][5368] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.221 [INFO][5368] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.236 [WARNING][5368] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.236 [INFO][5368] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" HandleID="k8s-pod-network.edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Workload="ci--4081--3--6--n--32c338e5e2-k8s-calico--apiserver--7c6f969f4--kxjbr-eth0" Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.238 [INFO][5368] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:52.247320 containerd[1486]: 2026-01-16 23:57:52.243 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3" Jan 16 23:57:52.247320 containerd[1486]: time="2026-01-16T23:57:52.247287178Z" level=info msg="TearDown network for sandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" successfully" Jan 16 23:57:52.253374 containerd[1486]: time="2026-01-16T23:57:52.253283094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:52.253537 containerd[1486]: time="2026-01-16T23:57:52.253380930Z" level=info msg="RemovePodSandbox \"edf380fdd62aae19d478c347e1946317eb45e6e825abc4b36b90ad18e83ed7f3\" returns successfully" Jan 16 23:57:52.254578 containerd[1486]: time="2026-01-16T23:57:52.253958266Z" level=info msg="StopPodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\"" Jan 16 23:57:52.384526 containerd[1486]: time="2026-01-16T23:57:52.383339447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.311 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c95011c-e199-44d0-b1e0-17f58fab750a", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df", Pod:"coredns-674b8bbfcf-cvb6h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9977d17229d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.311 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.311 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" iface="eth0" netns="" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.311 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.311 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.357 [INFO][5389] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.358 [INFO][5389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.359 [INFO][5389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.372 [WARNING][5389] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.373 [INFO][5389] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.376 [INFO][5389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:52.386189 containerd[1486]: 2026-01-16 23:57:52.381 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.386189 containerd[1486]: time="2026-01-16T23:57:52.385903702Z" level=info msg="TearDown network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" successfully" Jan 16 23:57:52.386189 containerd[1486]: time="2026-01-16T23:57:52.385927982Z" level=info msg="StopPodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" returns successfully" Jan 16 23:57:52.388543 containerd[1486]: time="2026-01-16T23:57:52.386409042Z" level=info msg="RemovePodSandbox for \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\"" Jan 16 23:57:52.388543 containerd[1486]: time="2026-01-16T23:57:52.386443241Z" level=info msg="Forcibly stopping sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\"" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.457 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c95011c-e199-44d0-b1e0-17f58fab750a", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-32c338e5e2", ContainerID:"87526ade0224998dccb0c22e11bf68829df618dbabbaf60d7b87f265eed5a0df", Pod:"coredns-674b8bbfcf-cvb6h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9977d17229d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.457 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.457 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" iface="eth0" netns="" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.457 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.457 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.482 [INFO][5410] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.482 [INFO][5410] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.482 [INFO][5410] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.494 [WARNING][5410] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.494 [INFO][5410] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" HandleID="k8s-pod-network.a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Workload="ci--4081--3--6--n--32c338e5e2-k8s-coredns--674b8bbfcf--cvb6h-eth0" Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.498 [INFO][5410] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:52.505250 containerd[1486]: 2026-01-16 23:57:52.501 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77" Jan 16 23:57:52.507899 containerd[1486]: time="2026-01-16T23:57:52.505804908Z" level=info msg="TearDown network for sandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" successfully" Jan 16 23:57:52.514800 containerd[1486]: time="2026-01-16T23:57:52.514015814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:52.514800 containerd[1486]: time="2026-01-16T23:57:52.514635349Z" level=info msg="RemovePodSandbox \"a0d98276a45f0f336105b9bd5f8f2a1a9759b9f48476cf77f99a806378b7ec77\" returns successfully" Jan 16 23:57:52.958427 containerd[1486]: time="2026-01-16T23:57:52.958364111Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:52.960494 containerd[1486]: time="2026-01-16T23:57:52.960188597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:57:52.960494 containerd[1486]: time="2026-01-16T23:57:52.960238995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:52.961171 kubelet[2574]: E0116 23:57:52.960917 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:52.961171 kubelet[2574]: E0116 23:57:52.961123 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:52.962586 kubelet[2574]: E0116 23:57:52.962489 2574 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58xrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hkcjr_calico-system(494d2d41-870f-485e-a8b2-cbb0fecf4357): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:52.963757 kubelet[2574]: E0116 23:57:52.963706 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:57:54.380349 
containerd[1486]: time="2026-01-16T23:57:54.380108254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:54.721706 containerd[1486]: time="2026-01-16T23:57:54.721520547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:54.723733 containerd[1486]: time="2026-01-16T23:57:54.722913096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:54.723824 kubelet[2574]: E0116 23:57:54.723111 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:54.723824 kubelet[2574]: E0116 23:57:54.723154 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:54.723824 kubelet[2574]: E0116 23:57:54.723273 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpxvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-kxjbr_calico-apiserver(28527141-9485-40ed-9795-772c961207d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:54.724269 containerd[1486]: time="2026-01-16T23:57:54.723005732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:54.724817 kubelet[2574]: E0116 23:57:54.724761 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:57:55.391509 containerd[1486]: time="2026-01-16T23:57:55.391059106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:57:55.732243 containerd[1486]: time="2026-01-16T23:57:55.731374553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:55.733495 containerd[1486]: time="2026-01-16T23:57:55.733338325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:57:55.733495 containerd[1486]: time="2026-01-16T23:57:55.733444881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:55.735480 kubelet[2574]: E0116 23:57:55.733843 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:55.735480 kubelet[2574]: E0116 23:57:55.733891 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:55.735480 kubelet[2574]: E0116 23:57:55.734592 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rlx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866b5b959f-q6rnd_calico-system(ab55bbc8-2f84-4b63-ae7a-3f7a0c596089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:55.736054 containerd[1486]: time="2026-01-16T23:57:55.734183055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:55.736588 kubelet[2574]: E0116 23:57:55.736267 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:57:56.084228 containerd[1486]: time="2026-01-16T23:57:56.084149754Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:56.086454 containerd[1486]: time="2026-01-16T23:57:56.086307043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:56.086454 containerd[1486]: time="2026-01-16T23:57:56.086401440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:56.087166 kubelet[2574]: E0116 23:57:56.086597 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:56.087166 kubelet[2574]: E0116 23:57:56.086655 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:56.087166 kubelet[2574]: E0116 23:57:56.086805 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvxng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-6hcrh_calico-apiserver(2216c59e-7647-44aa-810f-40503d382780): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:56.088109 kubelet[2574]: E0116 23:57:56.088059 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:57:56.386841 kubelet[2574]: E0116 23:57:56.386711 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:58:05.382843 kubelet[2574]: E0116 23:58:05.382344 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:58:05.385514 kubelet[2574]: E0116 23:58:05.384083 2574 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:58:07.382071 containerd[1486]: time="2026-01-16T23:58:07.381615211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:58:07.742192 containerd[1486]: time="2026-01-16T23:58:07.741844304Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:07.743782 containerd[1486]: time="2026-01-16T23:58:07.743490837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:58:07.743782 containerd[1486]: time="2026-01-16T23:58:07.743602795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:58:07.744320 kubelet[2574]: E0116 23:58:07.744244 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:07.744320 kubelet[2574]: E0116 23:58:07.744308 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:07.745287 kubelet[2574]: E0116 23:58:07.744428 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f02b4406bf4653bd0ff6a5488c4241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:07.747498 containerd[1486]: time="2026-01-16T23:58:07.747305134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:58:08.089000 containerd[1486]: time="2026-01-16T23:58:08.088950923Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:08.090560 containerd[1486]: time="2026-01-16T23:58:08.090444180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:58:08.090560 containerd[1486]: time="2026-01-16T23:58:08.090503100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:58:08.092163 kubelet[2574]: E0116 23:58:08.091574 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:58:08.092163 kubelet[2574]: E0116 23:58:08.091630 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:58:08.092163 kubelet[2574]: E0116 23:58:08.091753 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:08.093780 kubelet[2574]: E0116 23:58:08.093731 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:58:08.383500 kubelet[2574]: E0116 23:58:08.383362 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:58:09.381667 kubelet[2574]: E0116 23:58:09.381599 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:58:09.383281 kubelet[2574]: E0116 23:58:09.382281 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:58:14.277944 systemd[1]: run-containerd-runc-k8s.io-e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3-runc.N7BlNz.mount: Deactivated successfully. 
Jan 16 23:58:18.382189 containerd[1486]: time="2026-01-16T23:58:18.382093763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:58:18.713731 containerd[1486]: time="2026-01-16T23:58:18.713444304Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:18.715153 containerd[1486]: time="2026-01-16T23:58:18.714877177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:58:18.715153 containerd[1486]: time="2026-01-16T23:58:18.714994136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:18.715292 kubelet[2574]: E0116 23:58:18.715202 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:58:18.715292 kubelet[2574]: E0116 23:58:18.715277 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:58:18.717520 kubelet[2574]: E0116 23:58:18.715860 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58xrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hkcjr_calico-system(494d2d41-870f-485e-a8b2-cbb0fecf4357): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:18.717520 kubelet[2574]: E0116 23:58:18.717473 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:58:19.381802 kubelet[2574]: E0116 23:58:19.381746 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:58:20.383323 containerd[1486]: time="2026-01-16T23:58:20.383218107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:58:20.710983 containerd[1486]: time="2026-01-16T23:58:20.710707227Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:20.712586 containerd[1486]: time="2026-01-16T23:58:20.712019622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:58:20.712586 containerd[1486]: time="2026-01-16T23:58:20.712144502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:58:20.712779 kubelet[2574]: E0116 23:58:20.712675 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:58:20.713116 kubelet[2574]: E0116 23:58:20.712783 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:58:20.715611 kubelet[2574]: E0116 23:58:20.714966 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:20.718040 containerd[1486]: time="2026-01-16T23:58:20.717997403Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:58:21.066642 containerd[1486]: time="2026-01-16T23:58:21.066590867Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:21.068044 containerd[1486]: time="2026-01-16T23:58:21.067982384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:58:21.068174 containerd[1486]: time="2026-01-16T23:58:21.068102384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:58:21.069479 kubelet[2574]: E0116 23:58:21.068419 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:58:21.069479 kubelet[2574]: E0116 23:58:21.068484 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:58:21.069479 kubelet[2574]: E0116 23:58:21.068601 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:21.070065 kubelet[2574]: E0116 23:58:21.069802 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:58:21.382017 containerd[1486]: time="2026-01-16T23:58:21.381690685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:21.734471 containerd[1486]: time="2026-01-16T23:58:21.734132770Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:21.735657 containerd[1486]: time="2026-01-16T23:58:21.735470967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:21.735901 containerd[1486]: time="2026-01-16T23:58:21.735577567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:21.735941 kubelet[2574]: E0116 23:58:21.735833 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:21.735941 kubelet[2574]: E0116 23:58:21.735889 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:21.736233 kubelet[2574]: E0116 23:58:21.736078 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvxng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-6hcrh_calico-apiserver(2216c59e-7647-44aa-810f-40503d382780): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:21.737423 kubelet[2574]: E0116 23:58:21.737383 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:58:23.382812 containerd[1486]: time="2026-01-16T23:58:23.382339901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:23.718156 containerd[1486]: time="2026-01-16T23:58:23.717740990Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:23.719314 containerd[1486]: time="2026-01-16T23:58:23.719196749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:23.719314 containerd[1486]: time="2026-01-16T23:58:23.719267509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:23.719517 kubelet[2574]: E0116 23:58:23.719409 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:23.719517 kubelet[2574]: E0116 23:58:23.719474 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:23.719829 kubelet[2574]: E0116 23:58:23.719687 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpxvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-kxjbr_calico-apiserver(28527141-9485-40ed-9795-772c961207d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:23.720370 containerd[1486]: time="2026-01-16T23:58:23.720341788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:58:23.721061 kubelet[2574]: E0116 23:58:23.721007 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:58:24.054635 containerd[1486]: time="2026-01-16T23:58:24.054587997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:24.056892 containerd[1486]: time="2026-01-16T23:58:24.056836437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:58:24.057022 
containerd[1486]: time="2026-01-16T23:58:24.056951957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:58:24.058645 kubelet[2574]: E0116 23:58:24.058604 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:58:24.058747 kubelet[2574]: E0116 23:58:24.058655 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:58:24.058844 kubelet[2574]: E0116 23:58:24.058795 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rlx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866b5b959f-q6rnd_calico-system(ab55bbc8-2f84-4b63-ae7a-3f7a0c596089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:24.060374 kubelet[2574]: E0116 23:58:24.060330 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:58:30.383957 kubelet[2574]: E0116 23:58:30.383911 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:58:33.380845 kubelet[2574]: E0116 23:58:33.380782 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:58:34.386718 kubelet[2574]: E0116 23:58:34.386586 2574 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:58:35.380651 kubelet[2574]: E0116 23:58:35.380351 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:58:37.381652 kubelet[2574]: E0116 23:58:37.381206 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:58:38.382078 kubelet[2574]: E0116 23:58:38.381874 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:58:45.380563 kubelet[2574]: E0116 23:58:45.380504 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" 
podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:58:46.382513 kubelet[2574]: E0116 23:58:46.382375 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:58:47.380617 kubelet[2574]: E0116 23:58:47.380350 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:58:47.382509 kubelet[2574]: E0116 23:58:47.382395 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:58:52.383500 kubelet[2574]: E0116 23:58:52.382183 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:58:52.383500 kubelet[2574]: E0116 23:58:52.382881 2574 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:58:59.382078 containerd[1486]: time="2026-01-16T23:58:59.381034161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:58:59.745708 containerd[1486]: time="2026-01-16T23:58:59.744904218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:59.746861 containerd[1486]: time="2026-01-16T23:58:59.746723166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:58:59.746861 containerd[1486]: time="2026-01-16T23:58:59.746745846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:58:59.747662 kubelet[2574]: E0116 23:58:59.746950 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:59.747662 kubelet[2574]: E0116 23:58:59.747006 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:59.747662 kubelet[2574]: E0116 23:58:59.747122 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f02b4406bf4653bd0ff6a5488c4241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:59.749542 containerd[1486]: time="2026-01-16T23:58:59.749503368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:59:00.089314 containerd[1486]: time="2026-01-16T23:59:00.088640869Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:00.090465 containerd[1486]: time="2026-01-16T23:59:00.090316375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:59:00.090465 containerd[1486]: time="2026-01-16T23:59:00.090425337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:59:00.092607 kubelet[2574]: E0116 23:59:00.090679 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:59:00.092607 kubelet[2574]: E0116 23:59:00.090729 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:59:00.092607 kubelet[2574]: E0116 23:59:00.090846 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58dd6fc975-fz8dg_calico-system(40cdc5e9-1abb-47b0-ad9d-ea94f986178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:00.092900 kubelet[2574]: E0116 23:59:00.092859 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:59:00.385154 containerd[1486]: time="2026-01-16T23:59:00.384271598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:59:00.386980 kubelet[2574]: E0116 23:59:00.386928 2574 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:59:00.761562 containerd[1486]: time="2026-01-16T23:59:00.761075661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:00.762593 containerd[1486]: time="2026-01-16T23:59:00.762515763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:59:00.762811 containerd[1486]: time="2026-01-16T23:59:00.762636445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:00.763030 kubelet[2574]: E0116 23:59:00.762777 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:59:00.763030 kubelet[2574]: E0116 23:59:00.762822 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:59:00.763928 kubelet[2574]: E0116 23:59:00.763548 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58xrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hkcjr_calico-system(494d2d41-870f-485e-a8b2-cbb0fecf4357): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:00.765378 kubelet[2574]: E0116 23:59:00.765285 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:59:01.382806 containerd[1486]: 
time="2026-01-16T23:59:01.382754315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:59:01.736184 containerd[1486]: time="2026-01-16T23:59:01.735140161Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:01.738489 containerd[1486]: time="2026-01-16T23:59:01.736748826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:59:01.738489 containerd[1486]: time="2026-01-16T23:59:01.736835388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:59:01.738652 kubelet[2574]: E0116 23:59:01.736990 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:01.738652 kubelet[2574]: E0116 23:59:01.737058 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:01.738652 kubelet[2574]: E0116 23:59:01.737172 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:01.739918 containerd[1486]: time="2026-01-16T23:59:01.739694073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:59:02.069863 containerd[1486]: time="2026-01-16T23:59:02.069199015Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:02.071360 containerd[1486]: time="2026-01-16T23:59:02.071065485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:59:02.071360 containerd[1486]: time="2026-01-16T23:59:02.071175327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:59:02.071505 kubelet[2574]: E0116 23:59:02.071404 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:59:02.071761 kubelet[2574]: E0116 23:59:02.071519 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:59:02.071788 kubelet[2574]: E0116 23:59:02.071711 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr4fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b76rd_calico-system(ebb15273-01f0-4342-86a4-e67c5f3e53d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:02.073397 kubelet[2574]: E0116 23:59:02.073349 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:59:07.380742 containerd[1486]: time="2026-01-16T23:59:07.380373708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:59:07.722568 containerd[1486]: time="2026-01-16T23:59:07.722128768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:07.724483 containerd[1486]: time="2026-01-16T23:59:07.723688554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:59:07.724483 containerd[1486]: time="2026-01-16T23:59:07.723839757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:59:07.725441 kubelet[2574]: E0116 23:59:07.725383 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:59:07.725441 kubelet[2574]: E0116 23:59:07.725436 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:59:07.725832 kubelet[2574]: E0116 23:59:07.725662 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rlx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866b5b959f-q6rnd_calico-system(ab55bbc8-2f84-4b63-ae7a-3f7a0c596089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:07.727717 containerd[1486]: time="2026-01-16T23:59:07.727670101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:59:07.728116 kubelet[2574]: E0116 23:59:07.728046 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:59:08.075876 containerd[1486]: time="2026-01-16T23:59:08.075605080Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:08.079091 containerd[1486]: time="2026-01-16T23:59:08.078928416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:59:08.079091 containerd[1486]: time="2026-01-16T23:59:08.079051858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:08.079603 kubelet[2574]: E0116 23:59:08.079322 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:08.079603 kubelet[2574]: E0116 23:59:08.079404 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:08.079907 
kubelet[2574]: E0116 23:59:08.079600 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvxng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-6hcrh_calico-apiserver(2216c59e-7647-44aa-810f-40503d382780): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:08.080859 kubelet[2574]: E0116 23:59:08.080799 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:59:13.386391 kubelet[2574]: E0116 23:59:13.385257 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:59:14.284121 systemd[1]: run-containerd-runc-k8s.io-e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3-runc.ZgZqd9.mount: Deactivated successfully. Jan 16 23:59:14.387076 kubelet[2574]: E0116 23:59:14.386676 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:59:15.380821 containerd[1486]: time="2026-01-16T23:59:15.380767177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:59:15.383500 kubelet[2574]: E0116 23:59:15.383348 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:59:15.728118 containerd[1486]: time="2026-01-16T23:59:15.727374415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:15.729586 containerd[1486]: time="2026-01-16T23:59:15.729223648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:59:15.729586 containerd[1486]: time="2026-01-16T23:59:15.729336210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:15.729711 kubelet[2574]: E0116 23:59:15.729496 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:15.729711 kubelet[2574]: E0116 23:59:15.729540 2574 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:15.729711 kubelet[2574]: E0116 23:59:15.729677 2574 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpxvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c6f969f4-kxjbr_calico-apiserver(28527141-9485-40ed-9795-772c961207d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:15.731203 kubelet[2574]: E0116 23:59:15.731159 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:59:20.384387 kubelet[2574]: E0116 23:59:20.384339 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:59:22.383314 kubelet[2574]: E0116 23:59:22.382886 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:59:22.663844 systemd[1]: Started sshd@7-49.13.115.208:22-4.153.228.146:48690.service - OpenSSH per-connection server daemon (4.153.228.146:48690). Jan 16 23:59:23.306789 sshd[5557]: Accepted publickey for core from 4.153.228.146 port 48690 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:23.310549 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:23.317201 systemd-logind[1461]: New session 8 of user core. Jan 16 23:59:23.323946 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 23:59:23.905245 sshd[5557]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:23.912756 systemd[1]: sshd@7-49.13.115.208:22-4.153.228.146:48690.service: Deactivated successfully. Jan 16 23:59:23.922345 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 23:59:23.929141 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Jan 16 23:59:23.932156 systemd-logind[1461]: Removed session 8. 
Jan 16 23:59:25.391579 kubelet[2574]: E0116 23:59:25.391353 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:59:28.383448 kubelet[2574]: E0116 23:59:28.383068 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:59:29.014712 systemd[1]: Started sshd@8-49.13.115.208:22-4.153.228.146:48452.service - OpenSSH per-connection server daemon (4.153.228.146:48452). 
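In the records above the kubelet alternates between ErrImagePull (an actual pull attempt just failed) and ImagePullBackOff (the pod was synced while still inside the retry window). The sketch below only illustrates the shape of that escalation; the 10s initial delay and 5m cap mirror the kubelet's documented image-pull backoff defaults, but they are hard-coded assumptions here, not values read from this log:

```go
// Illustrative only: the exponential backoff shape behind ImagePullBackOff.
// Not the kubelet's actual implementation.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial backoff
	const maxDelay = 5 * time.Minute // assumed cap

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: ErrImagePull, next retry in %s\n", attempt, delay)
		delay *= 2 // roughly doubles each failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

This is why the later records settle into periodic "Back-off pulling image" messages at a steady cadence: once the cap is reached, every pod sync inside the window reports ImagePullBackOff without a new pull.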
Jan 16 23:59:29.380237 kubelet[2574]: E0116 23:59:29.380061 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:59:29.382087 kubelet[2574]: E0116 23:59:29.381803 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:59:29.640678 sshd[5573]: Accepted publickey for core from 4.153.228.146 port 48452 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:29.642213 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:29.648930 systemd-logind[1461]: New session 9 of user core. Jan 16 23:59:29.651753 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 23:59:30.161034 sshd[5573]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:30.168144 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Jan 16 23:59:30.168674 systemd[1]: sshd@8-49.13.115.208:22-4.153.228.146:48452.service: Deactivated successfully. Jan 16 23:59:30.172918 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 23:59:30.176074 systemd-logind[1461]: Removed session 9. Jan 16 23:59:35.279668 systemd[1]: Started sshd@9-49.13.115.208:22-4.153.228.146:51224.service - OpenSSH per-connection server daemon (4.153.228.146:51224). 
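Each "Unhandled Error" dump above includes the full Container spec, so the pods' security posture is recoverable from the log: the Calico operand containers drop all capabilities, run as UID/GID 10001 non-root, forbid privilege escalation, and use the RuntimeDefault seccomp profile. A sketch rebuilding that SecurityContext with the upstream k8s.io/api types; the values are copied from the dumps, and the ptr helper is a local convenience:

```go
// Sketch: the hardened SecurityContext recurring in the Container dumps
// (whisker, goldmane, apiserver), rebuilt with the upstream Go types.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	sc := corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		Privileged:               ptr(false),
		RunAsUser:                ptr(int64(10001)),
		RunAsGroup:               ptr(int64(10001)),
		RunAsNonRoot:             ptr(true),
		AllowPrivilegeEscalation: ptr(false),
		SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
	}
	fmt.Printf("%+v\n", sc)
}
```

The csi-node-driver pod is the deliberate exception in the dumps: it runs privileged as root with Bidirectional mount propagation, which a CSI node plugin needs to manage kubelet volume mounts.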
Jan 16 23:59:35.381494 kubelet[2574]: E0116 23:59:35.381126 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:59:35.905607 sshd[5587]: Accepted publickey for core from 4.153.228.146 port 51224 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:35.909021 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:35.917201 systemd-logind[1461]: New session 10 of user core. Jan 16 23:59:35.923726 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 23:59:36.381687 kubelet[2574]: E0116 23:59:36.381202 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:59:36.439965 sshd[5587]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:36.446590 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Jan 16 23:59:36.447660 systemd[1]: sshd@9-49.13.115.208:22-4.153.228.146:51224.service: Deactivated successfully. Jan 16 23:59:36.451888 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 23:59:36.454744 systemd-logind[1461]: Removed session 10. Jan 16 23:59:36.557580 systemd[1]: Started sshd@10-49.13.115.208:22-4.153.228.146:51238.service - OpenSSH per-connection server daemon (4.153.228.146:51238). Jan 16 23:59:37.201286 sshd[5601]: Accepted publickey for core from 4.153.228.146 port 51238 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:37.203950 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:37.208900 systemd-logind[1461]: New session 11 of user core. Jan 16 23:59:37.215767 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 23:59:37.810345 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:37.814434 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 23:59:37.814951 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Jan 16 23:59:37.816640 systemd[1]: sshd@10-49.13.115.208:22-4.153.228.146:51238.service: Deactivated successfully. Jan 16 23:59:37.826432 systemd-logind[1461]: Removed session 11. Jan 16 23:59:37.924872 systemd[1]: Started sshd@11-49.13.115.208:22-4.153.228.146:51248.service - OpenSSH per-connection server daemon (4.153.228.146:51248). 
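A registry-side check would distinguish "tag genuinely missing" from a node-side resolver problem. The sketch below queries the standard OCI distribution API; the anonymous-token exchange is how ghcr.io serves public images. The endpoint shapes and the expected 404 are assumptions based on the registry protocol, not on anything in this log:

```go
// Sketch: ask the registry directly whether the tag exists.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4"

	// Anonymous pull token for a public repository (assumed ghcr.io behavior).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// HEAD the manifest; a 404 matches the resolve errors in the log.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	res.Body.Close()
	fmt.Println(tag, "->", res.Status)
}
```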
Jan 16 23:59:38.549381 sshd[5615]: Accepted publickey for core from 4.153.228.146 port 51248 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:38.552490 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:38.558978 systemd-logind[1461]: New session 12 of user core. Jan 16 23:59:38.564661 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 23:59:39.099201 sshd[5615]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:39.104740 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Jan 16 23:59:39.104957 systemd[1]: sshd@11-49.13.115.208:22-4.153.228.146:51248.service: Deactivated successfully. Jan 16 23:59:39.109024 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 23:59:39.113855 systemd-logind[1461]: Removed session 12. Jan 16 23:59:40.383179 kubelet[2574]: E0116 23:59:40.383055 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b" Jan 16 23:59:41.382016 kubelet[2574]: E0116 23:59:41.381578 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3" Jan 16 23:59:42.383440 kubelet[2574]: E0116 23:59:42.383344 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0" Jan 16 23:59:43.379730 kubelet[2574]: E0116 23:59:43.379678 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357" Jan 16 23:59:44.215564 systemd[1]: Started sshd@12-49.13.115.208:22-4.153.228.146:51260.service - OpenSSH per-connection server daemon (4.153.228.146:51260). Jan 16 23:59:44.837615 sshd[5631]: Accepted publickey for core from 4.153.228.146 port 51260 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:44.840208 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:44.851232 systemd-logind[1461]: New session 13 of user core. Jan 16 23:59:44.855671 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 23:59:45.381542 sshd[5631]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:45.387674 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Jan 16 23:59:45.387789 systemd[1]: sshd@12-49.13.115.208:22-4.153.228.146:51260.service: Deactivated successfully. Jan 16 23:59:45.391064 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 23:59:45.392174 systemd-logind[1461]: Removed session 13. Jan 16 23:59:45.492380 systemd[1]: Started sshd@13-49.13.115.208:22-4.153.228.146:48764.service - OpenSSH per-connection server daemon (4.153.228.146:48764). Jan 16 23:59:46.133930 sshd[5665]: Accepted publickey for core from 4.153.228.146 port 48764 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:46.137355 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:46.147327 systemd-logind[1461]: New session 14 of user core. Jan 16 23:59:46.152871 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 23:59:46.383095 kubelet[2574]: E0116 23:59:46.382453 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780" Jan 16 23:59:46.848146 sshd[5665]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:46.856697 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Jan 16 23:59:46.858654 systemd[1]: sshd@13-49.13.115.208:22-4.153.228.146:48764.service: Deactivated successfully. Jan 16 23:59:46.865767 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 23:59:46.867847 systemd-logind[1461]: Removed session 14. 
Jan 16 23:59:46.960488 systemd[1]: Started sshd@14-49.13.115.208:22-4.153.228.146:48778.service - OpenSSH per-connection server daemon (4.153.228.146:48778). Jan 16 23:59:47.379655 kubelet[2574]: E0116 23:59:47.379600 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089" Jan 16 23:59:47.600484 sshd[5676]: Accepted publickey for core from 4.153.228.146 port 48778 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:47.601631 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:47.611053 systemd-logind[1461]: New session 15 of user core. Jan 16 23:59:47.615732 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 23:59:48.794530 sshd[5676]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:48.799452 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 23:59:48.803001 systemd[1]: sshd@14-49.13.115.208:22-4.153.228.146:48778.service: Deactivated successfully. Jan 16 23:59:48.807664 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. Jan 16 23:59:48.809657 systemd-logind[1461]: Removed session 15. Jan 16 23:59:48.911057 systemd[1]: Started sshd@15-49.13.115.208:22-4.153.228.146:48794.service - OpenSSH per-connection server daemon (4.153.228.146:48794). Jan 16 23:59:49.563183 sshd[5695]: Accepted publickey for core from 4.153.228.146 port 48794 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:49.566511 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:49.574042 systemd-logind[1461]: New session 16 of user core. Jan 16 23:59:49.576947 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 23:59:50.270767 sshd[5695]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:50.276683 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Jan 16 23:59:50.277637 systemd[1]: sshd@15-49.13.115.208:22-4.153.228.146:48794.service: Deactivated successfully. Jan 16 23:59:50.280669 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 23:59:50.281983 systemd-logind[1461]: Removed session 16. Jan 16 23:59:50.380700 systemd[1]: Started sshd@16-49.13.115.208:22-4.153.228.146:48798.service - OpenSSH per-connection server daemon (4.153.228.146:48798). Jan 16 23:59:50.983883 sshd[5706]: Accepted publickey for core from 4.153.228.146 port 48798 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:50.986453 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:50.992300 systemd-logind[1461]: New session 17 of user core. Jan 16 23:59:51.001786 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 23:59:51.514082 sshd[5706]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:51.520232 systemd[1]: sshd@16-49.13.115.208:22-4.153.228.146:48798.service: Deactivated successfully. 
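The goldmane dump earlier in the log also preserves that pod's health checks: exec probes against /health with -live and -ready, on 60s and 30s periods. A sketch rebuilding them with the upstream Go types, with all values copied from the dump:

```go
// Sketch: goldmane's liveness/readiness probes as dumped in the log,
// rebuilt with the upstream Go types.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	liveness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/health", "-live"}},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    60,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
	readiness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/health", "-ready"}},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    30,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
	fmt.Printf("liveness: %+v\nreadiness: %+v\n", liveness, readiness)
}
```

None of these probes ever ran here, since no container image could be pulled; the probes only matter once StartContainer succeeds.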
Jan 16 23:59:51.523426 systemd[1]: session-17.scope: Deactivated successfully.
Jan 16 23:59:51.525221 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit.
Jan 16 23:59:51.526735 systemd-logind[1461]: Removed session 17.
Jan 16 23:59:53.381416 kubelet[2574]: E0116 23:59:53.381286 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b"
Jan 16 23:59:54.380044 kubelet[2574]: E0116 23:59:54.379887 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357"
Jan 16 23:59:56.381828 kubelet[2574]: E0116 23:59:56.381705 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 16 23:59:56.384384 kubelet[2574]: E0116 23:59:56.384238 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 16 23:59:56.630586 systemd[1]: Started sshd@17-49.13.115.208:22-4.153.228.146:49880.service - OpenSSH per-connection server daemon (4.153.228.146:49880).
Jan 16 23:59:57.266490 sshd[5723]: Accepted publickey for core from 4.153.228.146 port 49880 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:57.269921 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:57.278012 systemd-logind[1461]: New session 18 of user core.
Jan 16 23:59:57.282675 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 16 23:59:57.794000 sshd[5723]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:57.799779 systemd[1]: sshd@17-49.13.115.208:22-4.153.228.146:49880.service: Deactivated successfully.
Jan 16 23:59:57.804660 systemd[1]: session-18.scope: Deactivated successfully.
Jan 16 23:59:57.806151 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit.
Jan 16 23:59:57.807316 systemd-logind[1461]: Removed session 18.
Jan 16 23:59:58.382336 kubelet[2574]: E0116 23:59:58.381927 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780"
Jan 16 23:59:59.380486 kubelet[2574]: E0116 23:59:59.380234 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089"
Jan 17 00:00:02.904738 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Jan 17 00:00:02.917242 systemd[1]: Started sshd@18-49.13.115.208:22-4.153.228.146:49888.service - OpenSSH per-connection server daemon (4.153.228.146:49888).
Jan 17 00:00:02.925909 systemd[1]: logrotate.service: Deactivated successfully.
Jan 17 00:00:03.525507 sshd[5739]: Accepted publickey for core from 4.153.228.146 port 49888 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:03.528315 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:03.539142 systemd-logind[1461]: New session 19 of user core.
Jan 17 00:00:03.543732 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:00:04.045739 sshd[5739]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:04.051668 systemd[1]: sshd@18-49.13.115.208:22-4.153.228.146:49888.service: Deactivated successfully.
Jan 17 00:00:04.059068 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:00:04.063533 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:00:04.065499 systemd-logind[1461]: Removed session 19.
Jan 17 00:00:05.384755 kubelet[2574]: E0117 00:00:05.384599 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58dd6fc975-fz8dg" podUID="40cdc5e9-1abb-47b0-ad9d-ea94f986178b"
Jan 17 00:00:06.387502 kubelet[2574]: E0117 00:00:06.386732 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357"
Jan 17 00:00:09.380936 kubelet[2574]: E0117 00:00:09.379557 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
Jan 17 00:00:10.381647 kubelet[2574]: E0117 00:00:10.381549 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-6hcrh" podUID="2216c59e-7647-44aa-810f-40503d382780"
Jan 17 00:00:11.381438 kubelet[2574]: E0117 00:00:11.381117 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089"
Jan 17 00:00:11.382161 kubelet[2574]: E0117 00:00:11.382115 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b76rd" podUID="ebb15273-01f0-4342-86a4-e67c5f3e53d0"
Jan 17 00:00:14.285739 systemd[1]: run-containerd-runc-k8s.io-e535ceb6bdcdd55fbec6fe1cc5e631d9851e9ab2c515f51cb1bce71a8c58f9d3-runc.8K4V3G.mount: Deactivated successfully.
Jan 17 00:00:18.383038 kubelet[2574]: E0117 00:00:18.382986 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hkcjr" podUID="494d2d41-870f-485e-a8b2-cbb0fecf4357"
Jan 17 00:00:18.879764 systemd[1]: cri-containerd-31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6.scope: Deactivated successfully.
Jan 17 00:00:18.880620 systemd[1]: cri-containerd-31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6.scope: Consumed 37.895s CPU time.
Jan 17 00:00:18.907143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6-rootfs.mount: Deactivated successfully.
Jan 17 00:00:18.917343 containerd[1486]: time="2026-01-17T00:00:18.917031206Z" level=info msg="shim disconnected" id=31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6 namespace=k8s.io
Jan 17 00:00:18.917343 containerd[1486]: time="2026-01-17T00:00:18.917337849Z" level=warning msg="cleaning up after shim disconnected" id=31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6 namespace=k8s.io
Jan 17 00:00:18.918002 containerd[1486]: time="2026-01-17T00:00:18.917356969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:18.988247 kubelet[2574]: E0117 00:00:18.986816 2574 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39942->10.0.0.2:2379: read: connection timed out"
Jan 17 00:00:18.994006 systemd[1]: cri-containerd-625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b.scope: Deactivated successfully.
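[Annotation] The "Failed to update lease" error above is the kubelet's client timing out while reading from etcd at 10.0.0.2:2379; the same datastore trouble explains the rejected event and pod-status failures that follow. A quick TCP latency probe toward that endpoint, as a sketch assuming plain connectivity from the node is allowed (an etcd/gRPC health check would be more faithful than a raw connect):

    # Sketch: time raw TCP connects to the etcd endpoint from the log line.
    # Connects that succeed quickly while reads still time out usually point
    # at etcd being overloaded rather than the network path being down.
    import socket
    import time

    ETCD_HOST, ETCD_PORT = "10.0.0.2", 2379  # endpoint taken from the log above

    for attempt in range(5):
        start = time.monotonic()
        try:
            with socket.create_connection((ETCD_HOST, ETCD_PORT), timeout=2.0):
                print(f"connect {attempt}: {time.monotonic() - start:.3f}s")
        except OSError as err:
            print(f"connect {attempt}: failed ({err})")
        time.sleep(1.0)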
Jan 17 00:00:18.994517 systemd[1]: cri-containerd-625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b.scope: Consumed 3.608s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 17 00:00:19.017295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b-rootfs.mount: Deactivated successfully.
Jan 17 00:00:19.022294 containerd[1486]: time="2026-01-17T00:00:19.022054724Z" level=info msg="shim disconnected" id=625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b namespace=k8s.io
Jan 17 00:00:19.022294 containerd[1486]: time="2026-01-17T00:00:19.022111884Z" level=warning msg="cleaning up after shim disconnected" id=625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b namespace=k8s.io
Jan 17 00:00:19.022294 containerd[1486]: time="2026-01-17T00:00:19.022120325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:19.159034 kubelet[2574]: I0117 00:00:19.158816 2574 scope.go:117] "RemoveContainer" containerID="31cd9c23b400b1b9a9869f41319a6c32d00898cf08bd43c1b3ab4ef0819230f6"
Jan 17 00:00:19.159034 kubelet[2574]: I0117 00:00:19.158981 2574 scope.go:117] "RemoveContainer" containerID="625662a81f5a132aaa084e3ed25efda1292af29d47e39510e18d809baf25770b"
Jan 17 00:00:19.162918 containerd[1486]: time="2026-01-17T00:00:19.162697361Z" level=info msg="CreateContainer within sandbox \"216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:00:19.162918 containerd[1486]: time="2026-01-17T00:00:19.162731601Z" level=info msg="CreateContainer within sandbox \"1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 00:00:19.189860 kubelet[2574]: E0117 00:00:19.181285 2574 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39780->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-7c6f969f4-kxjbr.188b5b7a56594b12 calico-apiserver 1728 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-7c6f969f4-kxjbr,UID:28527141-9485-40ed-9795-772c961207d3,APIVersion:v1,ResourceVersion:839,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-32c338e5e2,},FirstTimestamp:2026-01-16 23:57:39 +0000 UTC,LastTimestamp:2026-01-17 00:00:09.379488687 +0000 UTC m=+199.113551515,Count:10,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-32c338e5e2,}"
Jan 17 00:00:19.192039 containerd[1486]: time="2026-01-17T00:00:19.191913089Z" level=info msg="CreateContainer within sandbox \"216b420e37d65ec1159d71d51ea5d6d7076dd836bceda1a661826821b48e242b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"40c120bcf86aae21f4186af2fd4c74d2bd46bfe4f2f5438f351aaf3c54022ffa\""
Jan 17 00:00:19.193985 containerd[1486]: time="2026-01-17T00:00:19.192624736Z" level=info msg="StartContainer for \"40c120bcf86aae21f4186af2fd4c74d2bd46bfe4f2f5438f351aaf3c54022ffa\""
Jan 17 00:00:19.200644 containerd[1486]: time="2026-01-17T00:00:19.200587363Z" level=info msg="CreateContainer within sandbox \"1eba388e6ac7dc4c069d138e2aa2d0a07f523041a7adf9f904a42e88670d1c39\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"457cd8bf095dca39f6f70ee80952437f30a97b1a0291a20d34cbc79fb3397661\""
Jan 17 00:00:19.201353 containerd[1486]: time="2026-01-17T00:00:19.201321450Z" level=info msg="StartContainer for \"457cd8bf095dca39f6f70ee80952437f30a97b1a0291a20d34cbc79fb3397661\""
Jan 17 00:00:19.233720 systemd[1]: Started cri-containerd-40c120bcf86aae21f4186af2fd4c74d2bd46bfe4f2f5438f351aaf3c54022ffa.scope - libcontainer container 40c120bcf86aae21f4186af2fd4c74d2bd46bfe4f2f5438f351aaf3c54022ffa.
Jan 17 00:00:19.245831 systemd[1]: Started cri-containerd-457cd8bf095dca39f6f70ee80952437f30a97b1a0291a20d34cbc79fb3397661.scope - libcontainer container 457cd8bf095dca39f6f70ee80952437f30a97b1a0291a20d34cbc79fb3397661.
Jan 17 00:00:19.291105 containerd[1486]: time="2026-01-17T00:00:19.291038053Z" level=info msg="StartContainer for \"40c120bcf86aae21f4186af2fd4c74d2bd46bfe4f2f5438f351aaf3c54022ffa\" returns successfully"
Jan 17 00:00:19.299011 containerd[1486]: time="2026-01-17T00:00:19.298422716Z" level=info msg="StartContainer for \"457cd8bf095dca39f6f70ee80952437f30a97b1a0291a20d34cbc79fb3397661\" returns successfully"
Jan 17 00:00:19.995534 systemd[1]: cri-containerd-d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0.scope: Deactivated successfully.
Jan 17 00:00:19.997576 systemd[1]: cri-containerd-d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0.scope: Consumed 4.418s CPU time, 17.9M memory peak, 0B memory swap peak.
Jan 17 00:00:20.037061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0-rootfs.mount: Deactivated successfully.
Jan 17 00:00:20.045053 containerd[1486]: time="2026-01-17T00:00:20.044729795Z" level=info msg="shim disconnected" id=d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0 namespace=k8s.io
Jan 17 00:00:20.045053 containerd[1486]: time="2026-01-17T00:00:20.044812636Z" level=warning msg="cleaning up after shim disconnected" id=d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0 namespace=k8s.io
Jan 17 00:00:20.045053 containerd[1486]: time="2026-01-17T00:00:20.044823676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:20.162774 kubelet[2574]: I0117 00:00:20.162722 2574 scope.go:117] "RemoveContainer" containerID="d3905ded49821b4e07cc00d8f512338c522369f966b47a278a91039b253959f0"
Jan 17 00:00:20.165766 containerd[1486]: time="2026-01-17T00:00:20.165722407Z" level=info msg="CreateContainer within sandbox \"fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:00:20.186478 containerd[1486]: time="2026-01-17T00:00:20.185631740Z" level=info msg="CreateContainer within sandbox \"fb98d834193eab1c89148f2c27e54fced0a6ab18ca72aaa02edb5073bef1d8cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8aa2dc579def1835eaacd92640824325543ce974f31d8e31fa255ff1cb128991\""
Jan 17 00:00:20.186866 containerd[1486]: time="2026-01-17T00:00:20.186818230Z" level=info msg="StartContainer for \"8aa2dc579def1835eaacd92640824325543ce974f31d8e31fa255ff1cb128991\""
Jan 17 00:00:20.241900 systemd[1]: Started cri-containerd-8aa2dc579def1835eaacd92640824325543ce974f31d8e31fa255ff1cb128991.scope - libcontainer container 8aa2dc579def1835eaacd92640824325543ce974f31d8e31fa255ff1cb128991.
Jan 17 00:00:20.307294 containerd[1486]: time="2026-01-17T00:00:20.307234037Z" level=info msg="StartContainer for \"8aa2dc579def1835eaacd92640824325543ce974f31d8e31fa255ff1cb128991\" returns successfully"
Jan 17 00:00:20.384785 containerd[1486]: time="2026-01-17T00:00:20.384741111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:00:20.458509 kubelet[2574]: I0117 00:00:20.458424 2574 status_manager.go:895] "Failed to get status for pod" podUID="ebf7c238877d49620b63bdf994b25361" pod="kube-system/kube-apiserver-ci-4081-3-6-n-32c338e5e2" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39866->10.0.0.2:2379: read: connection timed out"
Jan 17 00:00:23.380990 kubelet[2574]: E0117 00:00:23.380927 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866b5b959f-q6rnd" podUID="ab55bbc8-2f84-4b63-ae7a-3f7a0c596089"
Jan 17 00:00:24.381539 kubelet[2574]: E0117 00:00:24.381321 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c6f969f4-kxjbr" podUID="28527141-9485-40ed-9795-772c961207d3"
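[Annotation] Every Calico image above fails on the same missing v3.30.4 tag, so the fastest cross-check is to ask the registry which tags it actually serves for one of these repositories. A sketch against the standard OCI /v2/<name>/tags/list endpoint, under the same anonymous-token assumption as the earlier snippet:

    # Sketch: enumerate the tags ghcr.io serves for one failing repository,
    # to compare against the v3.30.4 the pods keep requesting.
    import json
    import urllib.request

    IMAGE = "flatcar/calico/whisker"  # repository from the PullImage line above

    with urllib.request.urlopen(
        f"https://ghcr.io/token?scope=repository:{IMAGE}:pull"
    ) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{IMAGE}/tags/list",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["tags"])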