Jan 16 23:55:52.923143 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 16 23:55:52.923184 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:55:52.923198 kernel: KASLR enabled
Jan 16 23:55:52.923204 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 16 23:55:52.923210 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 16 23:55:52.923216 kernel: random: crng init done
Jan 16 23:55:52.923223 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:55:52.923229 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 16 23:55:52.923235 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 16 23:55:52.923243 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923249 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923255 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923261 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923267 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923275 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923283 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923289 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923296 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:52.923302 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 23:55:52.923308 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 16 23:55:52.923315 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:55:52.923321 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:55:52.923328 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 16 23:55:52.923334 kernel: Zone ranges:
Jan 16 23:55:52.923341 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:55:52.923349 kernel: DMA32 empty
Jan 16 23:55:52.923355 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 16 23:55:52.923361 kernel: Movable zone start for each node
Jan 16 23:55:52.923377 kernel: Early memory node ranges
Jan 16 23:55:52.923385 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 16 23:55:52.923391 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 16 23:55:52.923398 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 16 23:55:52.923404 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 16 23:55:52.923410 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 16 23:55:52.923417 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 16 23:55:52.923423 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 16 23:55:52.923429 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:55:52.923439 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 16 23:55:52.923445 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:55:52.923452 kernel: psci: PSCIv1.1 detected in firmware.
Jan 16 23:55:52.923461 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:55:52.923475 kernel: psci: Trusted OS migration not required
Jan 16 23:55:52.923488 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:55:52.923500 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 16 23:55:52.923507 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:55:52.923514 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:55:52.923521 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:55:52.923527 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:55:52.923534 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:55:52.923541 kernel: CPU features: detected: Hardware dirty bit management
Jan 16 23:55:52.923548 kernel: CPU features: detected: Spectre-v4
Jan 16 23:55:52.923554 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:55:52.923561 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 16 23:55:52.923570 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 16 23:55:52.923577 kernel: CPU features: detected: ARM erratum 1418040
Jan 16 23:55:52.923584 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 16 23:55:52.923591 kernel: alternatives: applying boot alternatives
Jan 16 23:55:52.923599 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:55:52.923606 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:55:52.923613 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:55:52.923620 kernel: Fallback order for Node 0: 0
Jan 16 23:55:52.923627 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 16 23:55:52.923640 kernel: Policy zone: Normal
Jan 16 23:55:52.923653 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:55:52.923664 kernel: software IO TLB: area num 2.
Jan 16 23:55:52.923671 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 16 23:55:52.923678 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 16 23:55:52.923692 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:55:52.923700 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:55:52.923707 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:55:52.923715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:55:52.923721 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:55:52.923728 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:55:52.923735 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 23:55:52.923742 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:55:52.923749 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:55:52.923761 kernel: GICv3: 256 SPIs implemented
Jan 16 23:55:52.923767 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:55:52.923774 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:55:52.923781 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 16 23:55:52.923788 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 16 23:55:52.923795 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 16 23:55:52.923802 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:55:52.923817 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:55:52.923830 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 16 23:55:52.923839 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 16 23:55:52.923846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:55:52.923857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:55:52.923864 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 16 23:55:52.923871 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 16 23:55:52.923878 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 16 23:55:52.923885 kernel: Console: colour dummy device 80x25
Jan 16 23:55:52.923892 kernel: ACPI: Core revision 20230628
Jan 16 23:55:52.923900 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 16 23:55:52.923907 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:55:52.923914 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:55:52.923921 kernel: landlock: Up and running.
Jan 16 23:55:52.923930 kernel: SELinux: Initializing.
Jan 16 23:55:52.923937 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:55:52.923990 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:55:52.923998 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:55:52.924006 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:55:52.924013 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:55:52.924020 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 23:55:52.924028 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 16 23:55:52.924035 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 16 23:55:52.924044 kernel: Remapping and enabling EFI services.
Jan 16 23:55:52.924059 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:55:52.924067 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:55:52.924075 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 16 23:55:52.924082 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 16 23:55:52.924089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:55:52.924096 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 16 23:55:52.924111 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:55:52.924120 kernel: SMP: Total of 2 processors activated.
Jan 16 23:55:52.924127 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:55:52.924137 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 16 23:55:52.924144 kernel: CPU features: detected: Common not Private translations
Jan 16 23:55:52.924157 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:55:52.924167 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 16 23:55:52.924174 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 16 23:55:52.924182 kernel: CPU features: detected: LSE atomic instructions
Jan 16 23:55:52.924189 kernel: CPU features: detected: Privileged Access Never
Jan 16 23:55:52.924197 kernel: CPU features: detected: RAS Extension Support
Jan 16 23:55:52.924206 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 16 23:55:52.924222 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:55:52.924237 kernel: alternatives: applying system-wide alternatives
Jan 16 23:55:52.924254 kernel: devtmpfs: initialized
Jan 16 23:55:52.924265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:55:52.924273 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:55:52.924281 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:55:52.924288 kernel: SMBIOS 3.0.0 present.
Jan 16 23:55:52.924298 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 16 23:55:52.924306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:55:52.924313 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:55:52.924321 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:55:52.924328 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:55:52.924336 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:55:52.924343 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Jan 16 23:55:52.924351 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:55:52.924358 kernel: cpuidle: using governor menu
Jan 16 23:55:52.924367 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:55:52.924379 kernel: ASID allocator initialised with 32768 entries
Jan 16 23:55:52.925012 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:55:52.925022 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:55:52.925030 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 16 23:55:52.925038 kernel: Modules: 0 pages in range for non-PLT usage
Jan 16 23:55:52.925045 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:55:52.925063 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:55:52.925071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:55:52.925086 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:55:52.925094 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:55:52.925112 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:55:52.925122 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:55:52.925129 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:55:52.925137 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:55:52.925144 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:55:52.925152 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:55:52.925159 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:55:52.925169 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:55:52.925177 kernel: ACPI: Interpreter enabled
Jan 16 23:55:52.925192 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:55:52.925208 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:55:52.925218 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 16 23:55:52.925226 kernel: printk: console [ttyAMA0] enabled
Jan 16 23:55:52.925233 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 23:55:52.925439 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:55:52.925606 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:55:52.925682 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:55:52.925748 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 16 23:55:52.925812 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 16 23:55:52.925822 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 16 23:55:52.925830 kernel: PCI host bridge to bus 0000:00
Jan 16 23:55:52.926974 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 16 23:55:52.927194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 23:55:52.927312 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 16 23:55:52.927386 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 23:55:52.927540 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 16 23:55:52.927672 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 16 23:55:52.927752 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 16 23:55:52.927861 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:55:52.928003 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.928085 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 16 23:55:52.928177 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.928247 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 16 23:55:52.928323 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.928390 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 16 23:55:52.928471 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.928539 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 16 23:55:52.928678 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.928749 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 16 23:55:52.928883 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.929756 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 16 23:55:52.929868 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.929937 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 16 23:55:52.930040 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.930191 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 16 23:55:52.930275 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:52.930365 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 16 23:55:52.930455 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 16 23:55:52.930524 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 16 23:55:52.930606 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:55:52.930676 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 16 23:55:52.930744 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:55:52.930825 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:55:52.930907 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 16 23:55:52.931000 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 16 23:55:52.931081 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 16 23:55:52.931223 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 16 23:55:52.931307 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 16 23:55:52.931388 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 16 23:55:52.931460 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 16 23:55:52.931553 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 16 23:55:52.931623 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 16 23:55:52.931693 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 16 23:55:52.931773 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 16 23:55:52.931846 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 16 23:55:52.931914 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:55:52.932633 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:55:52.932743 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 16 23:55:52.932840 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 16 23:55:52.932927 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:55:52.933128 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 16 23:55:52.933204 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:55:52.933328 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:55:52.933416 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 16 23:55:52.933485 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 16 23:55:52.933551 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 16 23:55:52.933623 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 16 23:55:52.933690 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:55:52.933757 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:55:52.933828 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 16 23:55:52.933894 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 16 23:55:52.935355 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 16 23:55:52.935457 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 16 23:55:52.935525 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:55:52.935592 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:55:52.935665 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 16 23:55:52.935730 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:55:52.935796 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:55:52.935880 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 16 23:55:52.938001 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:55:52.938204 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:55:52.938283 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 16 23:55:52.938374 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:55:52.938448 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:55:52.938554 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 16 23:55:52.938622 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:55:52.938697 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:55:52.938847 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 16 23:55:52.938935 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:52.940419 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 16 23:55:52.940503 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:52.940576 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 16 23:55:52.940653 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:52.940756 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 16 23:55:52.940838 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:52.940911 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 16 23:55:52.942170 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:52.942315 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:52.942385 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:52.942469 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:52.942565 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:52.942637 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:52.942706 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:52.942781 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 16 23:55:52.942848 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:52.944477 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 16 23:55:52.944658 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 16 23:55:52.944737 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 16 23:55:52.944842 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 16 23:55:52.944990 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 16 23:55:52.945168 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 16 23:55:52.945267 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 16 23:55:52.945337 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 16 23:55:52.945439 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 16 23:55:52.945522 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 16 23:55:52.945592 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 16 23:55:52.945659 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 16 23:55:52.945730 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 16 23:55:52.945806 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 16 23:55:52.945896 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 16 23:55:52.947218 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 16 23:55:52.947311 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 16 23:55:52.947391 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 16 23:55:52.947463 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 16 23:55:52.947530 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 16 23:55:52.947602 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 16 23:55:52.947680 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 16 23:55:52.947749 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:55:52.947817 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 16 23:55:52.947886 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 23:55:52.949038 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 16 23:55:52.949263 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 23:55:52.949345 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:52.949423 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 16 23:55:52.949502 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 23:55:52.949568 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 16 23:55:52.949635 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 16 23:55:52.949703 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:52.949779 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:55:52.949861 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 16 23:55:52.949935 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 23:55:52.951161 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 16 23:55:52.951248 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 16 23:55:52.951314 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:52.951390 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:55:52.951460 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 23:55:52.951526 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 16 23:55:52.951590 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 16 23:55:52.951693 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:52.951775 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 16 23:55:52.951851 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 16 23:55:52.951930 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 23:55:52.952022 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 16 23:55:52.952090 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 23:55:52.952211 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:52.952296 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 16 23:55:52.952366 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 16 23:55:52.952435 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 23:55:52.952530 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 16 23:55:52.952599 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:52.952681 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:52.952768 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 16 23:55:52.952851 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 16 23:55:52.952935 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 16 23:55:52.953778 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 23:55:52.953853 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 16 23:55:52.953927 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:52.954151 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:52.955185 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 23:55:52.955268 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 16 23:55:52.955337 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:52.955416 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:52.955490 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 23:55:52.955557 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 16 23:55:52.955703 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 16 23:55:52.955787 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:52.955860 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 16 23:55:52.955933 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 16 23:55:52.956220 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 16 23:55:52.956300 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 16 23:55:52.956436 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 16 23:55:52.956554 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:52.956635 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 16 23:55:52.956698 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 16 23:55:52.956759 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:52.956851 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 16 23:55:52.956928 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 16 23:55:52.958807 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:52.958907 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 16 23:55:52.959048 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 16 23:55:52.959204 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:52.959302 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 16 23:55:52.959418 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 16 23:55:52.959490 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:52.959577 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 16 23:55:52.959641 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:52.959746 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:52.959837 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 16 23:55:52.959907 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:52.961119 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:52.961232 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 16 23:55:52.961299 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:52.961364 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:52.961439 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 16 23:55:52.961501 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 16 23:55:52.961570 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:52.961580 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:55:52.961589 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:55:52.961597 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:55:52.961607 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:55:52.961616 kernel: iommu: Default domain type: Translated
Jan 16 23:55:52.961623 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:55:52.961631 kernel: efivars: Registered efivars operations
Jan 16 23:55:52.961640 kernel: vgaarb: loaded
Jan 16 23:55:52.961649 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:55:52.961657 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:55:52.961665 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:55:52.961673 kernel: pnp: PnP ACPI init
Jan 16 23:55:52.961749 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 16 23:55:52.961803 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:55:52.961813 kernel: NET: Registered PF_INET protocol family
Jan 16 23:55:52.961821 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:55:52.961833 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:55:52.961841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:55:52.961849 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:55:52.961857 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 23:55:52.961865 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 23:55:52.961873 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:55:52.961881 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:55:52.961889 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 23:55:52.964040 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:52.964090 kernel: PCI: CLS 0 bytes, default 64
Jan 16 23:55:52.964098 kernel: kvm [1]: HYP mode not available
Jan 16 23:55:52.964153 kernel: Initialise system trusted keyrings
Jan 16 23:55:52.964162 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 23:55:52.964171 kernel: Key type asymmetric registered
Jan 16 23:55:52.964188 kernel: Asymmetric key parser 'x509' registered
Jan 16 23:55:52.964196 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 23:55:52.964204 kernel: io scheduler mq-deadline registered
Jan 16 23:55:52.964213 kernel: io scheduler kyber registered
Jan 16 23:55:52.964224 kernel: io scheduler bfq registered
Jan 16 23:55:52.964233 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 16 23:55:52.964442 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 16 23:55:52.964523 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 16 23:55:52.964594 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.964671 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 16 23:55:52.964789 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 16 23:55:52.964867 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.964940 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 16 23:55:52.965060 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 16 23:55:52.965146 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.965220 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 16 23:55:52.965289 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 16 23:55:52.965360 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.965435 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 16 23:55:52.965504 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 16 23:55:52.965657 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.965747 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 16 23:55:52.965817 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 16 23:55:52.965935 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.966091 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 16 23:55:52.966185 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 16 23:55:52.966297 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.966373 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 16 23:55:52.966442 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 16 23:55:52.966519 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.966531 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 16 23:55:52.966601 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 16 23:55:52.966707 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 16 23:55:52.966780 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:52.966842 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 16 23:55:52.966856 kernel: ACPI: button: Power Button [PWRB]
Jan 16 23:55:52.966864 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 16 23:55:52.966968 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:52.967079 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:52.967094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 23:55:52.967151 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 16 23:55:52.967275 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 16 23:55:52.967293 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 16 23:55:52.967301 kernel: thunder_xcv, ver 1.0
Jan 16 23:55:52.967315 kernel: thunder_bgx, ver 1.0
Jan 16 23:55:52.967322 kernel: nicpf, ver 1.0
Jan 16 23:55:52.967330 kernel: nicvf, ver 1.0
Jan 16 23:55:52.967415 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 16 23:55:52.967501 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:55:52 UTC (1768607752)
Jan 16 23:55:52.967514 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 23:55:52.967522 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 16 23:55:52.967530 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 16 23:55:52.967541 kernel: watchdog: Hard watchdog permanently disabled
Jan 16 23:55:52.967549 kernel: NET: Registered PF_INET6 protocol family
Jan 16 23:55:52.967557 kernel: Segment Routing with IPv6
Jan 16 23:55:52.967587 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 23:55:52.967596 kernel: NET: Registered PF_PACKET protocol family
Jan 16 23:55:52.967604 kernel: Key type dns_resolver registered
Jan 16 23:55:52.967612 kernel: registered taskstats version 1
Jan 16 23:55:52.967619 kernel: Loading compiled-in X.509 certificates
Jan 16 23:55:52.967627 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 16 23:55:52.967646 kernel: Key type .fscrypt registered
Jan 16 23:55:52.967654 kernel: Key type fscrypt-provisioning registered
Jan 16 23:55:52.967662 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 23:55:52.967670 kernel: ima: Allocated hash algorithm: sha1
Jan 16 23:55:52.967678 kernel: ima: No architecture policies found
Jan 16 23:55:52.967686 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 16 23:55:52.967693 kernel: clk: Disabling unused clocks
Jan 16 23:55:52.967701 kernel: Freeing unused kernel memory: 39424K
Jan 16 23:55:52.967709 kernel: Run /init as init process
Jan 16 23:55:52.967718 kernel: with arguments:
Jan 16 23:55:52.967727 kernel: /init
Jan 16 23:55:52.967735 kernel: with environment:
Jan 16 23:55:52.967742 kernel: HOME=/
Jan 16 23:55:52.967750 kernel: TERM=linux
Jan 16 23:55:52.967760 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:55:52.967770 systemd[1]: Detected virtualization kvm.
Jan 16 23:55:52.967779 systemd[1]: Detected architecture arm64.
Jan 16 23:55:52.967789 systemd[1]: Running in initrd.
Jan 16 23:55:52.967797 systemd[1]: No hostname configured, using default hostname.
Jan 16 23:55:52.967805 systemd[1]: Hostname set to .
Jan 16 23:55:52.967814 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:55:52.967823 systemd[1]: Queued start job for default target initrd.target.
Jan 16 23:55:52.967832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:55:52.967841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:55:52.967850 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 23:55:52.967860 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:55:52.967869 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 23:55:52.967878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 23:55:52.967888 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 23:55:52.967896 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 23:55:52.967905 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:55:52.967913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:55:52.967924 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:55:52.967932 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:55:52.967940 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:55:52.967993 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:55:52.968002 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:55:52.968010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:55:52.968022 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:55:52.968031 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:55:52.968058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:55:52.968068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:55:52.968076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:55:52.968085 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:55:52.968093 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 23:55:52.968111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:55:52.968121 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 23:55:52.968129 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 23:55:52.968138 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:55:52.968174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:55:52.968184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:52.968192 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 23:55:52.968237 systemd-journald[236]: Collecting audit messages is disabled.
Jan 16 23:55:52.968262 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:55:52.968270 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 23:55:52.968280 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:55:52.968289 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 23:55:52.968299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:52.968307 kernel: Bridge firewalling registered
Jan 16 23:55:52.968316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:52.968325 systemd-journald[236]: Journal started
Jan 16 23:55:52.968345 systemd-journald[236]: Runtime Journal (/run/log/journal/942e734e1edd4663824490e0cf6a70d7) is 8.0M, max 76.6M, 68.6M free.
Jan 16 23:55:52.934903 systemd-modules-load[237]: Inserted module 'overlay'
Jan 16 23:55:52.960992 systemd-modules-load[237]: Inserted module 'br_netfilter'
Jan 16 23:55:52.972982 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:55:52.974022 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:55:52.975252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:55:52.987248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:55:52.992263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:55:52.996522 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:55:53.000260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:53.003343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:55:53.011400 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 23:55:53.017028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:55:53.018990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:55:53.032375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:55:53.039992 dracut-cmdline[271]: dracut-dracut-053
Jan 16 23:55:53.044972 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:55:53.064532 systemd-resolved[276]: Positive Trust Anchors:
Jan 16 23:55:53.065392 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:55:53.065434 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:55:53.076038 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 16 23:55:53.078579 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:55:53.079533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:55:53.133058 kernel: SCSI subsystem initialized
Jan 16 23:55:53.138025 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 23:55:53.147014 kernel: iscsi: registered transport (tcp)
Jan 16 23:55:53.162034 kernel: iscsi: registered transport (qla4xxx)
Jan 16 23:55:53.162146 kernel: QLogic iSCSI HBA Driver
Jan 16 23:55:53.231062 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:55:53.237235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 23:55:53.258991 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 23:55:53.259091 kernel: device-mapper: uevent: version 1.0.3
Jan 16 23:55:53.259127 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 23:55:53.318010 kernel: raid6: neonx8 gen() 15426 MB/s
Jan 16 23:55:53.333060 kernel: raid6: neonx4 gen() 15219 MB/s
Jan 16 23:55:53.349158 kernel: raid6: neonx2 gen() 13151 MB/s
Jan 16 23:55:53.366010 kernel: raid6: neonx1 gen() 10385 MB/s
Jan 16 23:55:53.383010 kernel: raid6: int64x8 gen() 6918 MB/s
Jan 16 23:55:53.400003 kernel: raid6: int64x4 gen() 7318 MB/s
Jan 16 23:55:53.417159 kernel: raid6: int64x2 gen() 6068 MB/s
Jan 16 23:55:53.434010 kernel: raid6: int64x1 gen() 4962 MB/s
Jan 16 23:55:53.434081 kernel: raid6: using algorithm neonx8 gen() 15426 MB/s
Jan 16 23:55:53.451009 kernel: raid6: .... xor() 11767 MB/s, rmw enabled
Jan 16 23:55:53.451077 kernel: raid6: using neon recovery algorithm
Jan 16 23:55:53.456673 kernel: xor: measuring software checksum speed
Jan 16 23:55:53.456740 kernel: 8regs : 19778 MB/sec
Jan 16 23:55:53.457184 kernel: 32regs : 19547 MB/sec
Jan 16 23:55:53.457987 kernel: arm64_neon : 26981 MB/sec
Jan 16 23:55:53.458024 kernel: xor: using function: arm64_neon (26981 MB/sec)
Jan 16 23:55:53.508985 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 23:55:53.534496 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:55:53.541264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:55:53.557353 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Jan 16 23:55:53.566028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:55:53.574442 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 23:55:53.596360 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 16 23:55:53.634031 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:55:53.640183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:55:53.696660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:55:53.705218 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 23:55:53.720958 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:55:53.725427 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:55:53.726747 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:55:53.728900 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:55:53.736310 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 23:55:53.754770 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:55:53.807153 kernel: scsi host0: Virtio SCSI HBA
Jan 16 23:55:53.825012 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 16 23:55:53.825155 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 16 23:55:53.831290 kernel: ACPI: bus type USB registered
Jan 16 23:55:53.833075 kernel: usbcore: registered new interface driver usbfs
Jan 16 23:55:53.833154 kernel: usbcore: registered new interface driver hub
Jan 16 23:55:53.833514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:55:53.836140 kernel: usbcore: registered new device driver usb
Jan 16 23:55:53.833700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:53.838620 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:53.839500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:55:53.839588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:53.841807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:53.850758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:53.868572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:53.880229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:53.885154 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 16 23:55:53.885365 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 16 23:55:53.887051 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 16 23:55:53.890216 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 16 23:55:53.890450 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 16 23:55:53.892769 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 16 23:55:53.893020 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 16 23:55:53.894698 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 16 23:55:53.894896 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 16 23:55:53.894915 kernel: hub 1-0:1.0: USB hub found
Jan 16 23:55:53.895054 kernel: hub 1-0:1.0: 4 ports detected
Jan 16 23:55:53.897531 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 16 23:55:53.900216 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 16 23:55:53.900450 kernel: hub 2-0:1.0: USB hub found
Jan 16 23:55:53.902984 kernel: hub 2-0:1.0: 4 ports detected
Jan 16 23:55:53.903222 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 16 23:55:53.903351 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 16 23:55:53.905433 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 16 23:55:53.905643 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 16 23:55:53.905736 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 16 23:55:53.913047 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 23:55:53.913112 kernel: GPT:17805311 != 80003071
Jan 16 23:55:53.913125 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 23:55:53.913147 kernel: GPT:17805311 != 80003071
Jan 16 23:55:53.915450 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 23:55:53.915513 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 16 23:55:53.914638 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:53.918983 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 16 23:55:53.963848 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (523)
Jan 16 23:55:53.969219 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 16 23:55:53.975988 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (509)
Jan 16 23:55:53.986446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 16 23:55:53.993761 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 16 23:55:54.001656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 16 23:55:54.002482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 16 23:55:54.011294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 23:55:54.023750 disk-uuid[575]: Primary Header is updated.
Jan 16 23:55:54.023750 disk-uuid[575]: Secondary Entries is updated.
Jan 16 23:55:54.023750 disk-uuid[575]: Secondary Header is updated.
Jan 16 23:55:54.031022 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 16 23:55:54.037018 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 16 23:55:54.138485 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 16 23:55:54.272984 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 16 23:55:54.273080 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 16 23:55:54.273467 kernel: usbcore: registered new interface driver usbhid
Jan 16 23:55:54.273494 kernel: usbhid: USB HID core driver
Jan 16 23:55:54.381993 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 16 23:55:54.517993 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 16 23:55:54.572009 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 16 23:55:55.043143 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 16 23:55:55.043212 disk-uuid[576]: The operation has completed successfully.
Jan 16 23:55:55.102723 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 23:55:55.102844 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 23:55:55.115242 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 23:55:55.122670 sh[591]: Success
Jan 16 23:55:55.135177 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 16 23:55:55.199795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 23:55:55.210847 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 23:55:55.212384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 23:55:55.245211 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 16 23:55:55.245301 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:55:55.245325 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 23:55:55.246352 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 23:55:55.246390 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 23:55:55.255040 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 16 23:55:55.257970 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 23:55:55.259824 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 23:55:55.270477 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 23:55:55.274277 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 23:55:55.294155 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:55:55.294235 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:55:55.294251 kernel: BTRFS info (device sda6): using free space tree
Jan 16 23:55:55.299972 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 16 23:55:55.300047 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 16 23:55:55.313972 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:55:55.314351 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 23:55:55.321510 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 23:55:55.328430 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 23:55:55.437454 ignition[679]: Ignition 2.19.0
Jan 16 23:55:55.437464 ignition[679]: Stage: fetch-offline
Jan 16 23:55:55.437505 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:55.440204 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:55:55.437513 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:55.442245 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:55:55.437666 ignition[679]: parsed url from cmdline: ""
Jan 16 23:55:55.437670 ignition[679]: no config URL provided
Jan 16 23:55:55.437674 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 23:55:55.437682 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Jan 16 23:55:55.437687 ignition[679]: failed to fetch config: resource requires networking
Jan 16 23:55:55.438064 ignition[679]: Ignition finished successfully
Jan 16 23:55:55.453243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:55:55.478738 systemd-networkd[778]: lo: Link UP
Jan 16 23:55:55.478755 systemd-networkd[778]: lo: Gained carrier
Jan 16 23:55:55.482228 systemd-networkd[778]: Enumeration completed
Jan 16 23:55:55.482855 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 23:55:55.484524 systemd[1]: Reached target network.target - Network.
Jan 16 23:55:55.485636 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:55:55.485639 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:55:55.487259 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:55:55.487263 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:55:55.488046 systemd-networkd[778]: eth0: Link UP
Jan 16 23:55:55.488050 systemd-networkd[778]: eth0: Gained carrier
Jan 16 23:55:55.488059 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:55:55.492469 systemd-networkd[778]: eth1: Link UP
Jan 16 23:55:55.492474 systemd-networkd[778]: eth1: Gained carrier
Jan 16 23:55:55.492487 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:55:55.495278 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 23:55:55.517599 ignition[781]: Ignition 2.19.0
Jan 16 23:55:55.518850 ignition[781]: Stage: fetch
Jan 16 23:55:55.519222 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:55.519238 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:55.519366 ignition[781]: parsed url from cmdline: ""
Jan 16 23:55:55.519370 ignition[781]: no config URL provided
Jan 16 23:55:55.519376 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 23:55:55.519386 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 16 23:55:55.519412 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 16 23:55:55.520317 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 16 23:55:55.540102 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 16 23:55:55.556075 systemd-networkd[778]: eth0: DHCPv4 address 46.224.42.239/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 16 23:55:55.721150 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 16 23:55:55.733857 ignition[781]: GET result: OK
Jan 16 23:55:55.734025 ignition[781]: parsing config with SHA512: c9f7f44a04fe7ec0900e55788587f4e382aa417d5eca379a8ba526149633fb5f8593a32b70a6bfa7130ccd60411b290246ef317de691080598a17aa670ca8ebd
Jan 16 23:55:55.744011 unknown[781]: fetched base config from "system"
Jan 16 23:55:55.744019 unknown[781]: fetched base config from "system"
Jan 16 23:55:55.744025 unknown[781]: fetched user config from "hetzner"
Jan 16 23:55:55.752117 ignition[781]: fetch: fetch complete
Jan 16 23:55:55.757912 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 23:55:55.752124 ignition[781]: fetch: fetch passed
Jan 16 23:55:55.752213 ignition[781]: Ignition finished successfully
Jan 16 23:55:55.773417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 23:55:55.790802 ignition[788]: Ignition 2.19.0
Jan 16 23:55:55.790812 ignition[788]: Stage: kargs
Jan 16 23:55:55.791112 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:55.791129 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:55.792167 ignition[788]: kargs: kargs passed
Jan 16 23:55:55.792231 ignition[788]: Ignition finished successfully
Jan 16 23:55:55.795815 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 23:55:55.803295 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 23:55:55.820692 ignition[794]: Ignition 2.19.0
Jan 16 23:55:55.820705 ignition[794]: Stage: disks
Jan 16 23:55:55.821001 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:55.821015 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:55.822143 ignition[794]: disks: disks passed
Jan 16 23:55:55.825032 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 23:55:55.822207 ignition[794]: Ignition finished successfully
Jan 16 23:55:55.829225 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 23:55:55.830163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 23:55:55.831423 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:55:55.832673 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 23:55:55.834441 systemd[1]: Reached target basic.target - Basic System.
Jan 16 23:55:55.840183 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 23:55:55.879917 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 16 23:55:55.884251 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 23:55:55.891135 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 23:55:55.959359 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 16 23:55:55.958614 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 23:55:55.960962 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:55:55.969421 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:55:55.975125 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 23:55:55.977604 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 23:55:55.978636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 23:55:55.978670 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:55:55.989013 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811)
Jan 16 23:55:55.991002 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:55:55.991074 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:55:55.991088 kernel: BTRFS info (device sda6): using free space tree
Jan 16 23:55:55.997794 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 23:55:56.000983 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 16 23:55:56.001045 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 16 23:55:56.010326 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 23:55:56.016058 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:55:56.068972 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 23:55:56.078371 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 16 23:55:56.079439 coreos-metadata[813]: Jan 16 23:55:56.078 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 16 23:55:56.080763 coreos-metadata[813]: Jan 16 23:55:56.079 INFO Fetch successful
Jan 16 23:55:56.080763 coreos-metadata[813]: Jan 16 23:55:56.079 INFO wrote hostname ci-4081-3-6-n-fe2a5b3650 to /sysroot/etc/hostname
Jan 16 23:55:56.085190 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 23:55:56.089610 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 23:55:56.095252 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 23:55:56.207457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 23:55:56.213085 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 23:55:56.218043 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 23:55:56.224968 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:55:56.243770 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 23:55:56.249669 ignition[928]: INFO : Ignition 2.19.0
Jan 16 23:55:56.249669 ignition[928]: INFO : Stage: mount
Jan 16 23:55:56.251749 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:56.251749 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:56.254770 ignition[928]: INFO : mount: mount passed
Jan 16 23:55:56.254770 ignition[928]: INFO : Ignition finished successfully
Jan 16 23:55:56.255478 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 23:55:56.257980 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 23:55:56.270185 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 23:55:56.279126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:55:56.294977 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939)
Jan 16 23:55:56.298027 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:55:56.298127 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:55:56.298158 kernel: BTRFS info (device sda6): using free space tree
Jan 16 23:55:56.301314 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 16 23:55:56.301363 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 16 23:55:56.304603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:55:56.335631 ignition[956]: INFO : Ignition 2.19.0
Jan 16 23:55:56.335631 ignition[956]: INFO : Stage: files
Jan 16 23:55:56.336837 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:56.336837 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:56.338973 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 23:55:56.339821 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 23:55:56.339821 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 23:55:56.344651 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 23:55:56.345792 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 23:55:56.347796 unknown[956]: wrote ssh authorized keys file for user: core
Jan 16 23:55:56.349998 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 23:55:56.352137 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 16 23:55:56.352137 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 16 23:55:56.441671 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 23:55:56.526014 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 16 23:55:56.526014 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:55:56.531307 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 16 23:55:56.782158 systemd-networkd[778]: eth1: Gained IPv6LL
Jan 16 23:55:56.824266 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 16 23:55:57.231409 systemd-networkd[778]: eth0: Gained IPv6LL
Jan 16 23:55:57.326629 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:55:57.328618 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 16 23:55:57.331836 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 23:55:57.331836 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 23:55:57.331836 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 16 23:55:57.331836 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 16 23:55:57.331836 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:55:57.337217 ignition[956]: INFO : files: files passed
Jan 16 23:55:57.337217 ignition[956]: INFO : Ignition finished successfully
Jan 16 23:55:57.336789 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 23:55:57.345243 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 23:55:57.349296 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 23:55:57.353374 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 23:55:57.354247 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 23:55:57.374568 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:55:57.374568 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:55:57.377553 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:55:57.380171 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:55:57.381107 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 23:55:57.392284 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 23:55:57.428082 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 23:55:57.428312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 23:55:57.430630 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 23:55:57.432237 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 23:55:57.433897 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 23:55:57.439243 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 23:55:57.454175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:55:57.461267 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 23:55:57.474428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:55:57.475886 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:55:57.476884 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 23:55:57.477918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 23:55:57.478104 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:55:57.479587 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 23:55:57.480254 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 23:55:57.481435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 23:55:57.482554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:55:57.483659 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 23:55:57.484688 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 23:55:57.485772 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:55:57.486952 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 23:55:57.488001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 23:55:57.489156 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 23:55:57.490018 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 23:55:57.490158 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:55:57.491488 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:55:57.492142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:55:57.493176 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 23:55:57.494969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:55:57.496015 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 23:55:57.496148 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:55:57.497844 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 23:55:57.497971 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:55:57.499322 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 23:55:57.499421 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 23:55:57.500464 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 16 23:55:57.500558 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 23:55:57.510350 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 23:55:57.514227 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 23:55:57.514763 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 23:55:57.514893 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:55:57.515899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 23:55:57.516232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:55:57.528639 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 23:55:57.529683 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 23:55:57.534609 ignition[1009]: INFO : Ignition 2.19.0
Jan 16 23:55:57.534609 ignition[1009]: INFO : Stage: umount
Jan 16 23:55:57.534609 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:55:57.534609 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:55:57.537855 ignition[1009]: INFO : umount: umount passed
Jan 16 23:55:57.537855 ignition[1009]: INFO : Ignition finished successfully
Jan 16 23:55:57.541556 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 23:55:57.541694 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 23:55:57.545737 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 23:55:57.546535 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 23:55:57.546589 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 23:55:57.547370 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 23:55:57.547420 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 23:55:57.548031 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 23:55:57.548131 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 23:55:57.549110 systemd[1]: Stopped target network.target - Network.
Jan 16 23:55:57.550066 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 23:55:57.550114 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:55:57.551103 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 23:55:57.551852 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 23:55:57.556222 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:55:57.558358 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 23:55:57.559321 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 23:55:57.559837 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 23:55:57.559883 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:55:57.561105 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 23:55:57.561144 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:55:57.562220 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 23:55:57.562266 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 23:55:57.563452 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 23:55:57.563492 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 23:55:57.564783 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 23:55:57.566162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 23:55:57.568009 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 23:55:57.568134 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 23:55:57.569165 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 23:55:57.569251 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 23:55:57.572070 systemd-networkd[778]: eth0: DHCPv6 lease lost
Jan 16 23:55:57.577066 systemd-networkd[778]: eth1: DHCPv6 lease lost
Jan 16 23:55:57.578519 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 23:55:57.579419 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 23:55:57.580665 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 23:55:57.580788 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 23:55:57.584406 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 23:55:57.584474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:55:57.590091 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 23:55:57.590586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 23:55:57.590657 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:55:57.593210 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 23:55:57.593260 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:55:57.594191 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 23:55:57.594230 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:55:57.595403 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 23:55:57.595438 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:55:57.600373 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:55:57.620862 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 23:55:57.621188 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:55:57.624100 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 23:55:57.624263 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 23:55:57.625614 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 23:55:57.625663 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:55:57.626347 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 23:55:57.626377 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:55:57.627452 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 23:55:57.627500 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:55:57.629016 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 23:55:57.629071 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:55:57.630525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:55:57.630573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:57.637323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 23:55:57.640727 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 23:55:57.640798 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:55:57.642864 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 23:55:57.642929 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:55:57.644758 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 23:55:57.644815 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:55:57.646069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:55:57.646109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:57.647314 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 23:55:57.647419 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 23:55:57.648665 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 23:55:57.655185 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 23:55:57.663605 systemd[1]: Switching root.
Jan 16 23:55:57.701012 systemd-journald[236]: Journal stopped
Jan 16 23:55:58.703831 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jan 16 23:55:58.703895 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 23:55:58.703917 kernel: SELinux: policy capability open_perms=1
Jan 16 23:55:58.703927 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 23:55:58.703936 kernel: SELinux: policy capability always_check_network=0
Jan 16 23:55:58.703962 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 23:55:58.703973 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 23:55:58.703982 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 23:55:58.703992 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 23:55:58.704073 systemd[1]: Successfully loaded SELinux policy in 36.696ms.
Jan 16 23:55:58.704105 kernel: audit: type=1403 audit(1768607757.853:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 23:55:58.704117 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.456ms.
Jan 16 23:55:58.704128 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:55:58.704139 systemd[1]: Detected virtualization kvm.
Jan 16 23:55:58.704150 systemd[1]: Detected architecture arm64.
Jan 16 23:55:58.704160 systemd[1]: Detected first boot.
Jan 16 23:55:58.704171 systemd[1]: Hostname set to <ci-4081-3-6-n-fe2a5b3650>.
Jan 16 23:55:58.704183 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:55:58.704194 zram_generator::config[1051]: No configuration found.
Jan 16 23:55:58.704205 systemd[1]: Populated /etc with preset unit settings.
Jan 16 23:55:58.704217 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 23:55:58.704227 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 23:55:58.704237 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 23:55:58.704249 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 23:55:58.704265 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 23:55:58.704276 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 23:55:58.704289 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 23:55:58.704299 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 23:55:58.704311 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 23:55:58.704322 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 23:55:58.704333 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 23:55:58.704343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:55:58.706064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:55:58.706110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 23:55:58.706127 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 23:55:58.706138 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 23:55:58.706149 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:55:58.706160 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 16 23:55:58.706171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:55:58.706181 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 23:55:58.706193 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 23:55:58.706206 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:55:58.706216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 23:55:58.706227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:55:58.706237 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:55:58.706248 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:55:58.706258 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:55:58.706269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 23:55:58.706280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 23:55:58.706291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:55:58.706303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:55:58.706314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:55:58.706325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 23:55:58.706335 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 23:55:58.706346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 23:55:58.706357 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 23:55:58.706368 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 23:55:58.706378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 23:55:58.706389 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 23:55:58.706401 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 23:55:58.706412 systemd[1]: Reached target machines.target - Containers.
Jan 16 23:55:58.706422 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 23:55:58.706433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:55:58.706449 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:55:58.706464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 23:55:58.706477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:55:58.706488 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:55:58.706499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:55:58.706509 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 23:55:58.706520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:55:58.706531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 23:55:58.706542 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 23:55:58.706554 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 23:55:58.706991 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 23:55:58.707009 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 23:55:58.707021 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:55:58.707048 kernel: fuse: init (API version 7.39)
Jan 16 23:55:58.707064 kernel: ACPI: bus type drm_connector registered
Jan 16 23:55:58.707074 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:55:58.707085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 23:55:58.707096 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 23:55:58.707111 kernel: loop: module loaded
Jan 16 23:55:58.707122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:55:58.707133 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 16 23:55:58.707144 systemd[1]: Stopped verity-setup.service.
Jan 16 23:55:58.707154 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 23:55:58.707165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 23:55:58.707175 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 23:55:58.707187 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 23:55:58.707200 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 23:55:58.707210 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 23:55:58.707221 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 23:55:58.707232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:55:58.707243 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 23:55:58.707253 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 23:55:58.707304 systemd-journald[1121]: Collecting audit messages is disabled.
Jan 16 23:55:58.707328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:55:58.707341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:55:58.707352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 23:55:58.707365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 23:55:58.707377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 23:55:58.707388 systemd-journald[1121]: Journal started
Jan 16 23:55:58.707411 systemd-journald[1121]: Runtime Journal (/run/log/journal/942e734e1edd4663824490e0cf6a70d7) is 8.0M, max 76.6M, 68.6M free.
Jan 16 23:55:58.397719 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 23:55:58.416833 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 16 23:55:58.418181 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 23:55:58.709084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 23:55:58.711137 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:55:58.712726 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 23:55:58.712898 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 23:55:58.715521 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:55:58.715690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:55:58.717225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:55:58.718558 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 23:55:58.719797 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 23:55:58.737190 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 23:55:58.746243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 23:55:58.750146 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 23:55:58.753083 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 23:55:58.753135 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:55:58.757105 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 16 23:55:58.766164 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 23:55:58.773370 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 23:55:58.774208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:55:58.782573 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 23:55:58.794937 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 23:55:58.798270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 23:55:58.807283 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 23:55:58.810089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 23:55:58.812165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:55:58.824326 systemd-journald[1121]: Time spent on flushing to /var/log/journal/942e734e1edd4663824490e0cf6a70d7 is 139.176ms for 1120 entries.
Jan 16 23:55:58.824326 systemd-journald[1121]: System Journal (/var/log/journal/942e734e1edd4663824490e0cf6a70d7) is 8.0M, max 584.8M, 576.8M free.
Jan 16 23:55:59.003384 systemd-journald[1121]: Received client request to flush runtime journal.
Jan 16 23:55:59.003481 kernel: loop0: detected capacity change from 0 to 114432
Jan 16 23:55:59.003506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 23:55:59.003521 kernel: loop1: detected capacity change from 0 to 207008
Jan 16 23:55:58.825449 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 23:55:58.832389 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:55:58.839104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:55:58.842119 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 23:55:58.847377 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 23:55:58.851761 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 23:55:58.870370 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 16 23:55:58.894034 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 23:55:58.896385 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 23:55:58.906309 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 16 23:55:58.967425 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 16 23:55:58.976458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:55:58.985104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 23:55:58.987513 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 16 23:55:58.998181 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Jan 16 23:55:58.998191 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Jan 16 23:55:59.014423 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 23:55:59.025785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:55:59.041526 kernel: loop2: detected capacity change from 0 to 8
Jan 16 23:55:59.042726 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 23:55:59.070222 kernel: loop3: detected capacity change from 0 to 114328
Jan 16 23:55:59.119829 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 23:55:59.129006 kernel: loop4: detected capacity change from 0 to 114432
Jan 16 23:55:59.133144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:55:59.142984 kernel: loop5: detected capacity change from 0 to 207008
Jan 16 23:55:59.167995 kernel: loop6: detected capacity change from 0 to 8
Jan 16 23:55:59.173392 kernel: loop7: detected capacity change from 0 to 114328
Jan 16 23:55:59.175092 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 16 23:55:59.175111 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 16 23:55:59.183321 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 16 23:55:59.183770 (sd-merge)[1191]: Merged extensions into '/usr'.
Jan 16 23:55:59.183979 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:55:59.193936 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 23:55:59.194470 systemd[1]: Reloading...
Jan 16 23:55:59.348542 zram_generator::config[1221]: No configuration found.
Jan 16 23:55:59.348635 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 23:55:59.467060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:55:59.515553 systemd[1]: Reloading finished in 320 ms.
Jan 16 23:55:59.541030 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 23:55:59.542980 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 23:55:59.554278 systemd[1]: Starting ensure-sysext.service...
Jan 16 23:55:59.557923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:55:59.560556 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 23:55:59.565219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:55:59.573377 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jan 16 23:55:59.573527 systemd[1]: Reloading...
Jan 16 23:55:59.596415 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 23:55:59.596727 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 16 23:55:59.600400 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 16 23:55:59.600687 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 16 23:55:59.600741 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jan 16 23:55:59.603808 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 23:55:59.603826 systemd-tmpfiles[1258]: Skipping /boot
Jan 16 23:55:59.617438 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 23:55:59.617457 systemd-tmpfiles[1258]: Skipping /boot
Jan 16 23:55:59.644903 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Jan 16 23:55:59.655990 zram_generator::config[1286]: No configuration found.
Jan 16 23:55:59.824924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:55:59.885864 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 16 23:55:59.886184 systemd[1]: Reloading finished in 312 ms.
Jan 16 23:55:59.902057 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:55:59.904973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:55:59.911984 kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 23:55:59.933611 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 23:55:59.936424 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 23:55:59.940359 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 23:55:59.947148 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:55:59.951258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:55:59.957237 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 23:55:59.964261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:55:59.967646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:55:59.974381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:55:59.979327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:55:59.982132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:55:59.993378 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 23:55:59.997086 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:55:59.999269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:56:00.001355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:56:00.004897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:56:00.005606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:56:00.008229 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 16 23:56:00.008608 systemd[1]: Finished ensure-sysext.service.
Jan 16 23:56:00.015803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:56:00.015925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:56:00.022225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 23:56:00.023584 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:56:00.025041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 23:56:00.040217 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 23:56:00.041825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:56:00.070674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 23:56:00.099161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:56:00.099328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:56:00.102217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:56:00.115742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:56:00.115969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:56:00.121328 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:56:00.121483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:56:00.126407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 23:56:00.136459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:56:00.137547 augenrules[1398]: No rules Jan 16 23:56:00.147157 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 23:56:00.148248 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:56:00.180973 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1295) Jan 16 23:56:00.192980 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 16 23:56:00.193098 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 23:56:00.193137 kernel: [drm] features: -context_init Jan 16 23:56:00.193150 kernel: [drm] number of scanouts: 1 Jan 16 23:56:00.194162 kernel: [drm] number of cap sets: 0 Jan 16 23:56:00.199224 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 16 23:56:00.205309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:56:00.209780 kernel: Console: switching to colour frame buffer device 160x50 Jan 16 23:56:00.214038 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 23:56:00.221218 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 23:56:00.221840 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 23:56:00.246498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:56:00.248992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 16 23:56:00.257334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:56:00.259587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:56:00.272315 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 23:56:00.306338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 23:56:00.340219 systemd-networkd[1366]: lo: Link UP Jan 16 23:56:00.340228 systemd-networkd[1366]: lo: Gained carrier Jan 16 23:56:00.341582 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 23:56:00.344441 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 23:56:00.345670 systemd-networkd[1366]: Enumeration completed Jan 16 23:56:00.345825 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 23:56:00.346464 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:56:00.346467 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:56:00.348499 systemd-timesyncd[1381]: No network connectivity, watching for changes. Jan 16 23:56:00.349107 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:56:00.349120 systemd-networkd[1366]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:56:00.349680 systemd-networkd[1366]: eth0: Link UP Jan 16 23:56:00.349684 systemd-networkd[1366]: eth0: Gained carrier Jan 16 23:56:00.349701 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:56:00.354351 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 23:56:00.355414 systemd-networkd[1366]: eth1: Link UP Jan 16 23:56:00.355678 systemd-networkd[1366]: eth1: Gained carrier Jan 16 23:56:00.355705 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:56:00.360086 systemd-resolved[1367]: Positive Trust Anchors: Jan 16 23:56:00.360107 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 23:56:00.360143 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 23:56:00.364096 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 23:56:00.369675 systemd-resolved[1367]: Using system hostname 'ci-4081-3-6-n-fe2a5b3650'. Jan 16 23:56:00.372710 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
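The "Positive Trust Anchors" line logged by systemd-resolved above is the DNSSEC root trust anchor in DS form: owner "." with key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest itself. A small stdlib sketch that splits the record into those fields (the input string is copied verbatim from the log):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Format: "<owner> IN DS <key tag> <algorithm> <digest type> <digest>"
    	rec := ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    	f := strings.Fields(rec)
    	if len(f) != 7 || f[1] != "IN" || f[2] != "DS" {
    		fmt.Println("not a DS record")
    		return
    	}
    	fmt.Printf("owner=%s keytag=%s algorithm=%s digesttype=%s\ndigest=%s\n",
    		f[0], f[3], f[4], f[5], f[6])
    }
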
Jan 16 23:56:00.375461 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 23:56:00.376226 systemd[1]: Reached target network.target - Network. Jan 16 23:56:00.377787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:56:00.387679 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:56:00.397106 systemd-networkd[1366]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 23:56:00.398658 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Jan 16 23:56:00.401499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:56:00.414901 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 23:56:00.416238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:56:00.417104 systemd-networkd[1366]: eth0: DHCPv4 address 46.224.42.239/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 23:56:00.417128 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:56:00.417808 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 23:56:00.417915 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Jan 16 23:56:00.418622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 23:56:00.419551 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 23:56:00.420502 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 23:56:00.420803 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Jan 16 23:56:00.421257 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 23:56:00.421882 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 23:56:00.421920 systemd[1]: Reached target paths.target - Path Units. Jan 16 23:56:00.422496 systemd[1]: Reached target timers.target - Timer Units. Jan 16 23:56:00.423653 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 23:56:00.425843 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 23:56:00.431716 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 23:56:00.435101 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 23:56:00.436793 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 23:56:00.437823 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 23:56:00.438547 systemd[1]: Reached target basic.target - Basic System. Jan 16 23:56:00.439227 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:56:00.439265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:56:00.445378 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 23:56:00.451221 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 23:56:00.456300 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:56:00.455343 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
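Note the shape of the DHCP leases just logged: each interface receives a /32, so the advertised gateway (172.31.1.1) is not inside the host's own prefix and is only reachable via an on-link host route, a layout common with cloud providers. A quick stdlib check of that containment, with the addresses copied from the log:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	prefix := netip.MustParsePrefix("46.224.42.239/32") // eth0 lease from the log
    	gw := netip.MustParseAddr("172.31.1.1")             // advertised gateway

    	// A /32 contains exactly one address, so the gateway cannot be
    	// on-prefix; the stack needs an explicit on-link route to reach it.
    	fmt.Println("gateway inside lease prefix:", prefix.Contains(gw)) // false
    }
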
Jan 16 23:56:00.465310 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 23:56:00.468314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 23:56:00.469197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 23:56:00.473251 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 23:56:00.478399 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 23:56:00.483325 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 16 23:56:00.490975 jq[1439]: false Jan 16 23:56:00.498382 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 23:56:00.501689 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 23:56:00.508388 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 23:56:00.510831 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 23:56:00.512228 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 23:56:00.515191 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 23:56:00.518316 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 23:56:00.524436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 23:56:00.524614 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 23:56:00.536375 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 23:56:00.549889 dbus-daemon[1438]: [system] SELinux support is enabled Jan 16 23:56:00.550545 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 23:56:00.564064 jq[1451]: true Jan 16 23:56:00.556431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 23:56:00.556471 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 23:56:00.559210 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 23:56:00.559233 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 23:56:00.567311 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 23:56:00.567530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 16 23:56:00.576743 extend-filesystems[1440]: Found loop4 Jan 16 23:56:00.576743 extend-filesystems[1440]: Found loop5 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found loop6 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found loop7 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda1 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda2 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda3 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found usr Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda4 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda6 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda7 Jan 16 23:56:00.583890 extend-filesystems[1440]: Found sda9 Jan 16 23:56:00.583890 extend-filesystems[1440]: Checking size of /dev/sda9 Jan 16 23:56:00.627162 extend-filesystems[1440]: Resized partition /dev/sda9 Jan 16 23:56:00.630214 jq[1460]: true Jan 16 23:56:00.636959 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Jan 16 23:56:00.634339 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 23:56:00.644018 tar[1453]: linux-arm64/LICENSE Jan 16 23:56:00.644018 tar[1453]: linux-arm64/helm Jan 16 23:56:00.654986 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 16 23:56:00.648097 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 23:56:00.649561 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 23:56:00.659097 coreos-metadata[1437]: Jan 16 23:56:00.658 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 16 23:56:00.671013 coreos-metadata[1437]: Jan 16 23:56:00.668 INFO Fetch successful Jan 16 23:56:00.671013 coreos-metadata[1437]: Jan 16 23:56:00.669 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 16 23:56:00.673192 coreos-metadata[1437]: Jan 16 23:56:00.671 INFO Fetch successful Jan 16 23:56:00.729337 systemd-logind[1448]: New seat seat0. Jan 16 23:56:00.747465 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 23:56:00.747498 systemd-logind[1448]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 16 23:56:00.755549 update_engine[1449]: I20260116 23:56:00.750864 1449 main.cc:92] Flatcar Update Engine starting Jan 16 23:56:00.797686 update_engine[1449]: I20260116 23:56:00.766045 1449 update_check_scheduler.cc:74] Next update check in 5m37s Jan 16 23:56:00.784511 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 23:56:00.785575 systemd[1]: Started update-engine.service - Update Engine. Jan 16 23:56:00.797983 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 23:56:00.830968 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 16 23:56:00.842554 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 23:56:00.868695 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1311) Jan 16 23:56:00.844475 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 23:56:00.875066 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:56:00.879240 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
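The coreos-metadata records above fetch instance metadata from the link-local endpoint and succeed on the first attempt. A minimal sketch of the same fetch, stdlib only; the URL is taken from the log, while the 5-second timeout is an illustrative choice rather than the agent's actual setting:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}

    	// Link-local metadata endpoint logged by coreos-metadata.
    	resp, err := client.Get("http://169.254.169.254/hetzner/v1/metadata")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "fetch failed:", err)
    		os.Exit(1)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "read failed:", err)
    		os.Exit(1)
    	}
    	fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
    }
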
Jan 16 23:56:00.881538 extend-filesystems[1480]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 16 23:56:00.881538 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 16 23:56:00.881538 extend-filesystems[1480]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 16 23:56:00.887843 extend-filesystems[1440]: Resized filesystem in /dev/sda9 Jan 16 23:56:00.887843 extend-filesystems[1440]: Found sr0 Jan 16 23:56:00.889865 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 23:56:00.890860 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 23:56:00.911660 systemd[1]: Starting sshkeys.service... Jan 16 23:56:00.943555 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 23:56:00.959470 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 23:56:00.972510 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 23:56:01.049029 coreos-metadata[1522]: Jan 16 23:56:01.048 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 16 23:56:01.051162 coreos-metadata[1522]: Jan 16 23:56:01.051 INFO Fetch successful Jan 16 23:56:01.057706 unknown[1522]: wrote ssh authorized keys file for user: core Jan 16 23:56:01.096485 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:56:01.101450 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 23:56:01.106472 systemd[1]: Finished sshkeys.service. Jan 16 23:56:01.148455 containerd[1469]: time="2026-01-16T23:56:01.148217840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 23:56:01.225068 containerd[1469]: time="2026-01-16T23:56:01.224297000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:01.232154 containerd[1469]: time="2026-01-16T23:56:01.232096600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:01.232324 containerd[1469]: time="2026-01-16T23:56:01.232307680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 23:56:01.232406 containerd[1469]: time="2026-01-16T23:56:01.232392680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 23:56:01.233575 containerd[1469]: time="2026-01-16T23:56:01.233354120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 23:56:01.233739 containerd[1469]: time="2026-01-16T23:56:01.233722440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:01.234180 containerd[1469]: time="2026-01-16T23:56:01.234155480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:01.234291 containerd[1469]: time="2026-01-16T23:56:01.234276360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 16 23:56:01.235229 containerd[1469]: time="2026-01-16T23:56:01.235140960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:01.235383 containerd[1469]: time="2026-01-16T23:56:01.235366720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:01.235600 containerd[1469]: time="2026-01-16T23:56:01.235583120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:01.235664 containerd[1469]: time="2026-01-16T23:56:01.235652440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:01.236266 containerd[1469]: time="2026-01-16T23:56:01.236233640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:01.237912 containerd[1469]: time="2026-01-16T23:56:01.236579400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:01.237912 containerd[1469]: time="2026-01-16T23:56:01.236834880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:01.237912 containerd[1469]: time="2026-01-16T23:56:01.236853400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 23:56:01.237912 containerd[1469]: time="2026-01-16T23:56:01.236982720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 23:56:01.237912 containerd[1469]: time="2026-01-16T23:56:01.237048200Z" level=info msg="metadata content store policy set" policy=shared Jan 16 23:56:01.245608 containerd[1469]: time="2026-01-16T23:56:01.245561000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 23:56:01.245781 containerd[1469]: time="2026-01-16T23:56:01.245767600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 23:56:01.246103 containerd[1469]: time="2026-01-16T23:56:01.246084240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 23:56:01.246235 containerd[1469]: time="2026-01-16T23:56:01.246217040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 23:56:01.247478 containerd[1469]: time="2026-01-16T23:56:01.247061400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 23:56:01.247478 containerd[1469]: time="2026-01-16T23:56:01.247286320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 23:56:01.247618 containerd[1469]: time="2026-01-16T23:56:01.247586080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 16 23:56:01.247768 containerd[1469]: time="2026-01-16T23:56:01.247746280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 23:56:01.247798 containerd[1469]: time="2026-01-16T23:56:01.247771080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 23:56:01.247798 containerd[1469]: time="2026-01-16T23:56:01.247787320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247802520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247816960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247831160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247847160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247863120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247876840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247889560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.247914 containerd[1469]: time="2026-01-16T23:56:01.247902840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 23:56:01.248105 containerd[1469]: time="2026-01-16T23:56:01.247925320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.248105 containerd[1469]: time="2026-01-16T23:56:01.247940040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.248968560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249008360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249024200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249038880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249052680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249067600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249081440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249099600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249114960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249127160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249139840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249168040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249194040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249218680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249331 containerd[1469]: time="2026-01-16T23:56:01.249231440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249409920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249430960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249442800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249455320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249466280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249479920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249490600Z" level=info msg="NRI interface is disabled by configuration." Jan 16 23:56:01.249899 containerd[1469]: time="2026-01-16T23:56:01.249503080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 16 23:56:01.250168 containerd[1469]: time="2026-01-16T23:56:01.249875040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 23:56:01.250168 containerd[1469]: time="2026-01-16T23:56:01.249935160Z" level=info msg="Connect containerd service" Jan 16 23:56:01.252965 containerd[1469]: time="2026-01-16T23:56:01.252009880Z" level=info msg="using legacy CRI server" Jan 16 23:56:01.252965 containerd[1469]: time="2026-01-16T23:56:01.252038360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 23:56:01.252965 containerd[1469]: time="2026-01-16T23:56:01.252145400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 23:56:01.255819 containerd[1469]: time="2026-01-16T23:56:01.255245240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 23:56:01.255819 
containerd[1469]: time="2026-01-16T23:56:01.255768800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 23:56:01.255819 containerd[1469]: time="2026-01-16T23:56:01.255810680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 23:56:01.255962 containerd[1469]: time="2026-01-16T23:56:01.255904920Z" level=info msg="Start subscribing containerd event" Jan 16 23:56:01.257660 containerd[1469]: time="2026-01-16T23:56:01.256978560Z" level=info msg="Start recovering state" Jan 16 23:56:01.257660 containerd[1469]: time="2026-01-16T23:56:01.257080040Z" level=info msg="Start event monitor" Jan 16 23:56:01.257660 containerd[1469]: time="2026-01-16T23:56:01.257096640Z" level=info msg="Start snapshots syncer" Jan 16 23:56:01.257660 containerd[1469]: time="2026-01-16T23:56:01.257106920Z" level=info msg="Start cni network conf syncer for default" Jan 16 23:56:01.257660 containerd[1469]: time="2026-01-16T23:56:01.257114560Z" level=info msg="Start streaming server" Jan 16 23:56:01.257389 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 23:56:01.258583 containerd[1469]: time="2026-01-16T23:56:01.258076120Z" level=info msg="containerd successfully booted in 0.114520s" Jan 16 23:56:01.406651 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 23:56:01.418244 tar[1453]: linux-arm64/README.md Jan 16 23:56:01.431603 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 23:56:01.434118 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 23:56:01.445791 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 23:56:01.454182 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 23:56:01.454418 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 23:56:01.466405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 23:56:01.480124 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 23:56:01.487447 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 23:56:01.490618 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 16 23:56:01.492762 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 23:56:01.838374 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 16 23:56:01.839841 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Jan 16 23:56:01.846435 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 23:56:01.850123 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 23:56:01.865854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:01.874296 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 23:56:01.903013 systemd-networkd[1366]: eth1: Gained IPv6LL Jan 16 23:56:01.904157 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Jan 16 23:56:01.927646 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 23:56:02.713701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:02.715105 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 23:56:02.720320 systemd[1]: Startup finished in 840ms (kernel) + 5.153s (initrd) + 4.903s (userspace) = 10.897s. 
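The "Startup finished" components above sum to 0.840s + 5.153s + 4.903s = 10.896s, one millisecond short of the displayed 10.897s; systemd totals the unrounded timestamps and rounds each component independently, so a 1ms drift like this is expected. A one-line check with the displayed values:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	kernel := 840 * time.Millisecond
    	initrd := 5153 * time.Millisecond
    	userspace := 4903 * time.Millisecond

    	// Sum of the already-rounded components; the log's 10.897s total
    	// comes from the raw timestamps, hence the 1ms difference.
    	fmt.Println(kernel + initrd + userspace) // 10.896s
    }
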
Jan 16 23:56:02.721653 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:03.247338 kubelet[1568]: E0116 23:56:03.247208 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:03.250620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:03.250848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:13.260509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 23:56:13.270885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:13.413888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:13.420895 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:13.472210 kubelet[1587]: E0116 23:56:13.472158 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:13.476634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:13.476891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:23.510840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 23:56:23.521332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:23.681191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:23.685860 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:23.732044 kubelet[1602]: E0116 23:56:23.731979 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:23.735321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:23.735478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:29.976646 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 23:56:29.989341 systemd[1]: Started sshd@0-46.224.42.239:22-4.153.228.146:58062.service - OpenSSH per-connection server daemon (4.153.228.146:58062). Jan 16 23:56:30.617557 sshd[1610]: Accepted publickey for core from 4.153.228.146 port 58062 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:30.620540 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:30.637587 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 23:56:30.638418 systemd-logind[1448]: New session 1 of user core. 
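The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet (it is normally written later by kubeadm or node provisioning), and systemd's restart policy retries roughly every ten seconds, which matches the restart-counter records. A sketch of the failing step reduced to its file read; the error text is shaped after the log message, and the wrapping helper is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    )

    // loadKubeletConfig mimics the first step the log shows failing:
    // reading the kubelet config file before anything else can start.
    func loadKubeletConfig(path string) ([]byte, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, fmt.Errorf("failed to load Kubelet config file %s, error: %w", path, err)
    	}
    	return data, nil
    }

    func main() {
    	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1) // systemd records status=1/FAILURE and schedules a restart
    	}
    }
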
Jan 16 23:56:30.651933 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 23:56:30.668027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 23:56:30.687500 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 23:56:30.693616 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 23:56:30.811115 systemd[1614]: Queued start job for default target default.target. Jan 16 23:56:30.821734 systemd[1614]: Created slice app.slice - User Application Slice. Jan 16 23:56:30.822144 systemd[1614]: Reached target paths.target - Paths. Jan 16 23:56:30.822219 systemd[1614]: Reached target timers.target - Timers. Jan 16 23:56:30.825263 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 23:56:30.854573 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 23:56:30.854758 systemd[1614]: Reached target sockets.target - Sockets. Jan 16 23:56:30.854775 systemd[1614]: Reached target basic.target - Basic System. Jan 16 23:56:30.854841 systemd[1614]: Reached target default.target - Main User Target. Jan 16 23:56:30.854875 systemd[1614]: Startup finished in 153ms. Jan 16 23:56:30.854981 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 23:56:30.866430 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 23:56:31.328437 systemd[1]: Started sshd@1-46.224.42.239:22-4.153.228.146:58076.service - OpenSSH per-connection server daemon (4.153.228.146:58076). Jan 16 23:56:31.959772 sshd[1625]: Accepted publickey for core from 4.153.228.146 port 58076 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:31.962668 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:31.967851 systemd-logind[1448]: New session 2 of user core. Jan 16 23:56:31.984264 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 23:56:32.237089 systemd-timesyncd[1381]: Contacted time server 85.215.189.120:123 (2.flatcar.pool.ntp.org). Jan 16 23:56:32.237197 systemd-timesyncd[1381]: Initial clock synchronization to Fri 2026-01-16 23:56:32.018394 UTC. Jan 16 23:56:32.408963 sshd[1625]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:32.415147 systemd[1]: sshd@1-46.224.42.239:22-4.153.228.146:58076.service: Deactivated successfully. Jan 16 23:56:32.420233 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 23:56:32.421232 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 16 23:56:32.422516 systemd-logind[1448]: Removed session 2. Jan 16 23:56:32.526468 systemd[1]: Started sshd@2-46.224.42.239:22-4.153.228.146:58088.service - OpenSSH per-connection server daemon (4.153.228.146:58088). Jan 16 23:56:33.145314 sshd[1632]: Accepted publickey for core from 4.153.228.146 port 58088 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:33.148039 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:33.154528 systemd-logind[1448]: New session 3 of user core. Jan 16 23:56:33.161480 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 23:56:33.578868 sshd[1632]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:33.588850 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 16 23:56:33.591361 systemd[1]: sshd@2-46.224.42.239:22-4.153.228.146:58088.service: Deactivated successfully. Jan 16 23:56:33.597424 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 16 23:56:33.598656 systemd-logind[1448]: Removed session 3. Jan 16 23:56:33.681258 systemd[1]: Started sshd@3-46.224.42.239:22-4.153.228.146:58098.service - OpenSSH per-connection server daemon (4.153.228.146:58098). Jan 16 23:56:33.760731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 23:56:33.772708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:33.907973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:33.913700 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:33.958744 kubelet[1649]: E0116 23:56:33.958655 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:33.961084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:33.961303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:34.284446 sshd[1639]: Accepted publickey for core from 4.153.228.146 port 58098 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:34.286494 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:34.292011 systemd-logind[1448]: New session 4 of user core. Jan 16 23:56:34.301293 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 23:56:34.701595 sshd[1639]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:34.706276 systemd[1]: sshd@3-46.224.42.239:22-4.153.228.146:58098.service: Deactivated successfully. Jan 16 23:56:34.711074 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 23:56:34.716678 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 16 23:56:34.718693 systemd-logind[1448]: Removed session 4. Jan 16 23:56:34.816385 systemd[1]: Started sshd@4-46.224.42.239:22-4.153.228.146:60394.service - OpenSSH per-connection server daemon (4.153.228.146:60394). Jan 16 23:56:35.400302 sshd[1661]: Accepted publickey for core from 4.153.228.146 port 60394 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:35.401543 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:35.406031 systemd-logind[1448]: New session 5 of user core. Jan 16 23:56:35.415247 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 23:56:35.743588 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 23:56:35.749817 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:35.768482 sudo[1664]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:35.863664 sshd[1661]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:35.869505 systemd[1]: sshd@4-46.224.42.239:22-4.153.228.146:60394.service: Deactivated successfully. Jan 16 23:56:35.872734 systemd[1]: session-5.scope: Deactivated successfully. 
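Every accepted login above reports the same fingerprint, SHA256:+BFNXg…, which OpenSSH derives as the unpadded base64 of the SHA-256 digest of the raw public-key blob. A stdlib sketch of that derivation; the blob below is a placeholder, so substitute the base64 field from a real authorized_keys line:

    package main

    import (
    	"crypto/sha256"
    	"encoding/base64"
    	"fmt"
    	"os"
    )

    func main() {
    	// Second field of an authorized_keys line, e.g. "ssh-rsa AAAAB3... comment".
    	// Placeholder value; substitute a real key blob.
    	blobB64 := "AAAAB3NzaC1yc2EAAAADAQABAAABAQC7"

    	blob, err := base64.StdEncoding.DecodeString(blobB64)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "bad key blob:", err)
    		os.Exit(1)
    	}
    	sum := sha256.Sum256(blob)
    	// OpenSSH prints the digest base64-encoded without '=' padding.
    	fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
    }
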
Jan 16 23:56:35.873835 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 16 23:56:35.874818 systemd-logind[1448]: Removed session 5. Jan 16 23:56:35.981626 systemd[1]: Started sshd@5-46.224.42.239:22-4.153.228.146:60404.service - OpenSSH per-connection server daemon (4.153.228.146:60404). Jan 16 23:56:36.585812 sshd[1669]: Accepted publickey for core from 4.153.228.146 port 60404 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:36.587842 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:36.594934 systemd-logind[1448]: New session 6 of user core. Jan 16 23:56:36.601332 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 23:56:36.921386 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 23:56:36.921898 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:36.926054 sudo[1673]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:36.932056 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 23:56:36.932358 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:36.949452 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 23:56:36.951235 auditctl[1676]: No rules Jan 16 23:56:36.952661 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 23:56:36.952868 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 23:56:36.955887 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:56:36.997007 augenrules[1694]: No rules Jan 16 23:56:37.000128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:56:37.001829 sudo[1672]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:37.102282 sshd[1669]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:37.106321 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 16 23:56:37.106554 systemd[1]: sshd@5-46.224.42.239:22-4.153.228.146:60404.service: Deactivated successfully. Jan 16 23:56:37.108644 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 23:56:37.111534 systemd-logind[1448]: Removed session 6. Jan 16 23:56:37.214416 systemd[1]: Started sshd@6-46.224.42.239:22-4.153.228.146:60412.service - OpenSSH per-connection server daemon (4.153.228.146:60412). Jan 16 23:56:37.809898 sshd[1702]: Accepted publickey for core from 4.153.228.146 port 60412 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:37.815234 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:37.822642 systemd-logind[1448]: New session 7 of user core. Jan 16 23:56:37.827303 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 23:56:38.146537 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 23:56:38.147389 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:38.467465 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 23:56:38.467493 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 23:56:38.732849 dockerd[1720]: time="2026-01-16T23:56:38.732344522Z" level=info msg="Starting up" Jan 16 23:56:38.813863 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2971988544-merged.mount: Deactivated successfully. Jan 16 23:56:38.831117 systemd[1]: var-lib-docker-metacopy\x2dcheck2323076385-merged.mount: Deactivated successfully. Jan 16 23:56:38.841137 dockerd[1720]: time="2026-01-16T23:56:38.841073847Z" level=info msg="Loading containers: start." Jan 16 23:56:38.969083 kernel: Initializing XFRM netlink socket Jan 16 23:56:39.054818 systemd-networkd[1366]: docker0: Link UP Jan 16 23:56:39.074011 dockerd[1720]: time="2026-01-16T23:56:39.073477390Z" level=info msg="Loading containers: done." Jan 16 23:56:39.090231 dockerd[1720]: time="2026-01-16T23:56:39.090066971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 23:56:39.090420 dockerd[1720]: time="2026-01-16T23:56:39.090303478Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 23:56:39.090481 dockerd[1720]: time="2026-01-16T23:56:39.090460807Z" level=info msg="Daemon has completed initialization" Jan 16 23:56:39.131309 dockerd[1720]: time="2026-01-16T23:56:39.131077531Z" level=info msg="API listen on /run/docker.sock" Jan 16 23:56:39.131705 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 23:56:40.162211 containerd[1469]: time="2026-01-16T23:56:40.162114386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 23:56:40.861658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710270614.mount: Deactivated successfully. 
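With the daemon reporting "API listen on /run/docker.sock", the Engine API is reachable over the unix socket; the earlier docker.socket warning is also why the canonical path is /run rather than /var/run. A minimal liveness probe against the real /_ping endpoint, stdlib only; the "docker" hostname in the URL is a dummy, since the custom dialer always connects to the socket:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 3 * time.Second,
    		Transport: &http.Transport{
    			// Ignore the address derived from the URL; always dial the socket.
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				var d net.Dialer
    				return d.DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    	}

    	// Only the path matters here; the host part is arbitrary.
    	resp, err := client.Get("http://docker/_ping")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "ping failed:", err)
    		os.Exit(1)
    	}
    	defer resp.Body.Close()
    	fmt.Println("docker API:", resp.Status) // expect "200 OK"
    }
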
Jan 16 23:56:41.671989 containerd[1469]: time="2026-01-16T23:56:41.670595830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:41.673005 containerd[1469]: time="2026-01-16T23:56:41.672966228Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080"
Jan 16 23:56:41.675340 containerd[1469]: time="2026-01-16T23:56:41.675296367Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:41.680169 containerd[1469]: time="2026-01-16T23:56:41.680121684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:41.681497 containerd[1469]: time="2026-01-16T23:56:41.681458151Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.519291951s"
Jan 16 23:56:41.681640 containerd[1469]: time="2026-01-16T23:56:41.681622197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 16 23:56:41.682467 containerd[1469]: time="2026-01-16T23:56:41.682438311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 16 23:56:42.845075 containerd[1469]: time="2026-01-16T23:56:42.845003516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:42.847981 containerd[1469]: time="2026-01-16T23:56:42.847011700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106"
Jan 16 23:56:42.848301 containerd[1469]: time="2026-01-16T23:56:42.848267906Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:42.852397 containerd[1469]: time="2026-01-16T23:56:42.852349374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:42.854290 containerd[1469]: time="2026-01-16T23:56:42.854245565Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.171586885s"
Jan 16 23:56:42.854446 containerd[1469]: time="2026-01-16T23:56:42.854429007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 16 23:56:42.855204 containerd[1469]: time="2026-01-16T23:56:42.855165077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 16 23:56:43.854647 containerd[1469]: time="2026-01-16T23:56:43.854536783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:43.856691 containerd[1469]: time="2026-01-16T23:56:43.856330252Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767"
Jan 16 23:56:43.859101 containerd[1469]: time="2026-01-16T23:56:43.858386421Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:43.864057 containerd[1469]: time="2026-01-16T23:56:43.864002800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:43.864985 containerd[1469]: time="2026-01-16T23:56:43.864912977Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.009697583s"
Jan 16 23:56:43.864985 containerd[1469]: time="2026-01-16T23:56:43.864981958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 16 23:56:43.866768 containerd[1469]: time="2026-01-16T23:56:43.866733212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 16 23:56:44.011406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 16 23:56:44.017391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:56:44.170388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:56:44.177435 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:56:44.226885 kubelet[1933]: E0116 23:56:44.226800 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:56:44.231555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:56:44.231880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:56:44.880810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167553454.mount: Deactivated successfully.
Jan 16 23:56:45.227548 containerd[1469]: time="2026-01-16T23:56:45.227115142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:45.230183 containerd[1469]: time="2026-01-16T23:56:45.230085442Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750"
Jan 16 23:56:45.231820 containerd[1469]: time="2026-01-16T23:56:45.231701150Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:45.238751 containerd[1469]: time="2026-01-16T23:56:45.237128786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:45.238751 containerd[1469]: time="2026-01-16T23:56:45.238482264Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.371704502s"
Jan 16 23:56:45.238751 containerd[1469]: time="2026-01-16T23:56:45.238552715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 16 23:56:45.239828 containerd[1469]: time="2026-01-16T23:56:45.239759086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 16 23:56:45.829179 update_engine[1449]: I20260116 23:56:45.829041 1449 update_attempter.cc:509] Updating boot flags...
Jan 16 23:56:45.938492 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1957)
Jan 16 23:56:46.014142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569880706.mount: Deactivated successfully.
Jan 16 23:56:46.043721 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1959)
Jan 16 23:56:46.761559 containerd[1469]: time="2026-01-16T23:56:46.760122678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:46.763168 containerd[1469]: time="2026-01-16T23:56:46.763125643Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jan 16 23:56:46.766063 containerd[1469]: time="2026-01-16T23:56:46.764912446Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:46.769437 containerd[1469]: time="2026-01-16T23:56:46.769385982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:46.771933 containerd[1469]: time="2026-01-16T23:56:46.771883072Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.532060967s"
Jan 16 23:56:46.772117 containerd[1469]: time="2026-01-16T23:56:46.772098591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 16 23:56:46.773237 containerd[1469]: time="2026-01-16T23:56:46.773207112Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 16 23:56:47.298711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3643289373.mount: Deactivated successfully.
Jan 16 23:56:47.306505 containerd[1469]: time="2026-01-16T23:56:47.306200183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:47.308109 containerd[1469]: time="2026-01-16T23:56:47.307870404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 16 23:56:47.309430 containerd[1469]: time="2026-01-16T23:56:47.309381018Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:47.313960 containerd[1469]: time="2026-01-16T23:56:47.313037323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:47.313960 containerd[1469]: time="2026-01-16T23:56:47.313813124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 540.434474ms"
Jan 16 23:56:47.313960 containerd[1469]: time="2026-01-16T23:56:47.313844671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 16 23:56:47.314841 containerd[1469]: time="2026-01-16T23:56:47.314817041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 16 23:56:47.922207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008604400.mount: Deactivated successfully.
Jan 16 23:56:49.383880 containerd[1469]: time="2026-01-16T23:56:49.383745381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:49.385955 containerd[1469]: time="2026-01-16T23:56:49.385854961Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239"
Jan 16 23:56:49.386701 containerd[1469]: time="2026-01-16T23:56:49.386612831Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:49.390928 containerd[1469]: time="2026-01-16T23:56:49.390845467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:56:49.393970 containerd[1469]: time="2026-01-16T23:56:49.393214208Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.078358359s"
Jan 16 23:56:49.393970 containerd[1469]: time="2026-01-16T23:56:49.393277843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 16 23:56:54.261471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 16 23:56:54.271339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:56:54.516371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:56:54.519228 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:56:54.564690 kubelet[2105]: E0116 23:56:54.564520 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:56:54.570142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:56:54.570295 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:56:56.367034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:56:56.378246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:56:56.416621 systemd[1]: Reloading requested from client PID 2118 ('systemctl') (unit session-7.scope)...
Jan 16 23:56:56.416775 systemd[1]: Reloading...
Jan 16 23:56:56.544793 zram_generator::config[2159]: No configuration found.
Jan 16 23:56:56.655540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:56:56.730543 systemd[1]: Reloading finished in 313 ms.
Jan 16 23:56:56.785585 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 16 23:56:56.785801 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 16 23:56:56.786424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:56:56.789216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:56:56.921940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:56:56.933819 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 16 23:56:56.986648 kubelet[2208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 23:56:56.988127 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 16 23:56:56.988127 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 23:56:56.988127 kubelet[2208]: I0116 23:56:56.987208 2208 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 16 23:56:57.362892 kubelet[2208]: I0116 23:56:57.362841 2208 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 16 23:56:57.363087 kubelet[2208]: I0116 23:56:57.363074 2208 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 16 23:56:57.363573 kubelet[2208]: I0116 23:56:57.363550 2208 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 16 23:56:57.389984 kubelet[2208]: E0116 23:56:57.389924 2208 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.224.42.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:57.394431 kubelet[2208]: I0116 23:56:57.394384 2208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 16 23:56:57.403666 kubelet[2208]: E0116 23:56:57.403628 2208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 16 23:56:57.403867 kubelet[2208]: I0116 23:56:57.403853 2208 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 16 23:56:57.406680 kubelet[2208]: I0116 23:56:57.406646 2208 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 16 23:56:57.407865 kubelet[2208]: I0116 23:56:57.407815 2208 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 16 23:56:57.408226 kubelet[2208]: I0116 23:56:57.408017 2208 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-fe2a5b3650","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 16 23:56:57.408434 kubelet[2208]: I0116 23:56:57.408422 2208 topology_manager.go:138] "Creating topology manager with none policy"
Jan 16 23:56:57.408497 kubelet[2208]: I0116 23:56:57.408489 2208 container_manager_linux.go:304] "Creating device plugin manager"
Jan 16 23:56:57.408742 kubelet[2208]: I0116 23:56:57.408729 2208 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 23:56:57.412382 kubelet[2208]: I0116 23:56:57.412347 2208 kubelet.go:446] "Attempting to sync node with API server"
Jan 16 23:56:57.412849 kubelet[2208]: I0116 23:56:57.412516 2208 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 16 23:56:57.412849 kubelet[2208]: I0116 23:56:57.412543 2208 kubelet.go:352] "Adding apiserver pod source"
Jan 16 23:56:57.412849 kubelet[2208]: I0116 23:56:57.412554 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 16 23:56:57.419556 kubelet[2208]: W0116 23:56:57.419483 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.224.42.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fe2a5b3650&limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:57.420246 kubelet[2208]: E0116 23:56:57.419577 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.224.42.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fe2a5b3650&limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:57.420246 kubelet[2208]: I0116 23:56:57.419676 2208 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 16 23:56:57.420638 kubelet[2208]: I0116 23:56:57.420603 2208 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 16 23:56:57.420766 kubelet[2208]: W0116 23:56:57.420752 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 16 23:56:57.423174 kubelet[2208]: I0116 23:56:57.423059 2208 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 16 23:56:57.423174 kubelet[2208]: I0116 23:56:57.423107 2208 server.go:1287] "Started kubelet"
Jan 16 23:56:57.425236 kubelet[2208]: W0116 23:56:57.425197 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.224.42.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:57.425376 kubelet[2208]: E0116 23:56:57.425355 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.224.42.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:57.425617 kubelet[2208]: I0116 23:56:57.425510 2208 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 16 23:56:57.428156 kubelet[2208]: I0116 23:56:57.428070 2208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 16 23:56:57.428632 kubelet[2208]: I0116 23:56:57.428458 2208 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 16 23:56:57.429755 kubelet[2208]: E0116 23:56:57.429489 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.42.239:6443/api/v1/namespaces/default/events\": dial tcp 46.224.42.239:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-fe2a5b3650.188b5b70807b572f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-fe2a5b3650,UID:ci-4081-3-6-n-fe2a5b3650,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-fe2a5b3650,},FirstTimestamp:2026-01-16 23:56:57.423083311 +0000 UTC m=+0.482887625,LastTimestamp:2026-01-16 23:56:57.423083311 +0000 UTC m=+0.482887625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-fe2a5b3650,}"
Jan 16 23:56:57.430840 kubelet[2208]: I0116 23:56:57.430668 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 16 23:56:57.432605 kubelet[2208]: I0116 23:56:57.431795 2208 server.go:479] "Adding debug handlers to kubelet server"
Jan 16 23:56:57.432930 kubelet[2208]: I0116 23:56:57.432898 2208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 16 23:56:57.435719 kubelet[2208]: E0116 23:56:57.435462 2208 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found"
Jan 16 23:56:57.435719 kubelet[2208]: I0116 23:56:57.435516 2208 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 16 23:56:57.435719 kubelet[2208]: I0116 23:56:57.435704 2208 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 16 23:56:57.435850 kubelet[2208]: I0116 23:56:57.435750 2208 reconciler.go:26] "Reconciler: start to sync state"
Jan 16 23:56:57.437371 kubelet[2208]: W0116 23:56:57.436281 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.224.42.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:57.437371 kubelet[2208]: E0116 23:56:57.436340 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.224.42.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:57.437371 kubelet[2208]: I0116 23:56:57.436494 2208 factory.go:221] Registration of the systemd container factory successfully
Jan 16 23:56:57.437371 kubelet[2208]: I0116 23:56:57.436575 2208 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 16 23:56:57.437371 kubelet[2208]: E0116 23:56:57.436755 2208 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 16 23:56:57.438345 kubelet[2208]: E0116 23:56:57.438287 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fe2a5b3650?timeout=10s\": dial tcp 46.224.42.239:6443: connect: connection refused" interval="200ms"
Jan 16 23:56:57.439062 kubelet[2208]: I0116 23:56:57.439035 2208 factory.go:221] Registration of the containerd container factory successfully
Jan 16 23:56:57.455023 kubelet[2208]: I0116 23:56:57.453037 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 16 23:56:57.459163 kubelet[2208]: I0116 23:56:57.459063 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 16 23:56:57.459163 kubelet[2208]: I0116 23:56:57.459158 2208 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 16 23:56:57.459423 kubelet[2208]: I0116 23:56:57.459197 2208 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 16 23:56:57.459423 kubelet[2208]: I0116 23:56:57.459204 2208 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 16 23:56:57.459423 kubelet[2208]: E0116 23:56:57.459258 2208 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 16 23:56:57.468524 kubelet[2208]: W0116 23:56:57.468214 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.224.42.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:57.468524 kubelet[2208]: E0116 23:56:57.468284 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.224.42.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:57.470009 kubelet[2208]: I0116 23:56:57.469724 2208 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 16 23:56:57.470009 kubelet[2208]: I0116 23:56:57.469741 2208 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 16 23:56:57.470009 kubelet[2208]: I0116 23:56:57.469761 2208 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 23:56:57.472318 kubelet[2208]: I0116 23:56:57.472288 2208 policy_none.go:49] "None policy: Start"
Jan 16 23:56:57.472449 kubelet[2208]: I0116 23:56:57.472436 2208 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 16 23:56:57.472521 kubelet[2208]: I0116 23:56:57.472512 2208 state_mem.go:35] "Initializing new in-memory state store"
Jan 16 23:56:57.482394 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 16 23:56:57.500892 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 16 23:56:57.505073 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 16 23:56:57.517064 kubelet[2208]: I0116 23:56:57.516865 2208 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 16 23:56:57.518108 kubelet[2208]: I0116 23:56:57.517435 2208 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 16 23:56:57.518108 kubelet[2208]: I0116 23:56:57.517467 2208 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 16 23:56:57.520474 kubelet[2208]: I0116 23:56:57.520247 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 16 23:56:57.521132 kubelet[2208]: E0116 23:56:57.521053 2208 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 16 23:56:57.522779 kubelet[2208]: E0116 23:56:57.522757 2208 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-fe2a5b3650\" not found"
Jan 16 23:56:57.578291 systemd[1]: Created slice kubepods-burstable-podc2cc036723e3d4b75bccc3328e52b6de.slice - libcontainer container kubepods-burstable-podc2cc036723e3d4b75bccc3328e52b6de.slice.
Jan 16 23:56:57.585065 kubelet[2208]: E0116 23:56:57.584836 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.587375 systemd[1]: Created slice kubepods-burstable-pod3e4c1a7372e4c44cc56827f901e126e5.slice - libcontainer container kubepods-burstable-pod3e4c1a7372e4c44cc56827f901e126e5.slice.
Jan 16 23:56:57.597234 kubelet[2208]: E0116 23:56:57.597180 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.601746 systemd[1]: Created slice kubepods-burstable-pod52d565efd703938a784538cf58aea068.slice - libcontainer container kubepods-burstable-pod52d565efd703938a784538cf58aea068.slice.
Jan 16 23:56:57.604177 kubelet[2208]: E0116 23:56:57.604136 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.621008 kubelet[2208]: I0116 23:56:57.620723 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.622963 kubelet[2208]: E0116 23:56:57.621259 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.42.239:6443/api/v1/nodes\": dial tcp 46.224.42.239:6443: connect: connection refused" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.639464 kubelet[2208]: E0116 23:56:57.639387 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fe2a5b3650?timeout=10s\": dial tcp 46.224.42.239:6443: connect: connection refused" interval="400ms"
Jan 16 23:56:57.737480 kubelet[2208]: I0116 23:56:57.737002 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2cc036723e3d4b75bccc3328e52b6de-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" (UID: \"c2cc036723e3d4b75bccc3328e52b6de\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.737480 kubelet[2208]: I0116 23:56:57.737063 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2cc036723e3d4b75bccc3328e52b6de-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" (UID: \"c2cc036723e3d4b75bccc3328e52b6de\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.737480 kubelet[2208]: I0116 23:56:57.737096 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2cc036723e3d4b75bccc3328e52b6de-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" (UID: \"c2cc036723e3d4b75bccc3328e52b6de\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.737480 kubelet[2208]: I0116 23:56:57.737143 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.737480 kubelet[2208]: I0116 23:56:57.737173 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.738851 kubelet[2208]: I0116 23:56:57.737203 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.738851 kubelet[2208]: I0116 23:56:57.737231 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52d565efd703938a784538cf58aea068-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-fe2a5b3650\" (UID: \"52d565efd703938a784538cf58aea068\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.738851 kubelet[2208]: I0116 23:56:57.737257 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.738851 kubelet[2208]: I0116 23:56:57.737288 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.824835 kubelet[2208]: I0116 23:56:57.824783 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.825400 kubelet[2208]: E0116 23:56:57.825368 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.42.239:6443/api/v1/nodes\": dial tcp 46.224.42.239:6443: connect: connection refused" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:57.881573 systemd[1]: Started sshd@7-46.224.42.239:22-185.156.73.233:20210.service - OpenSSH per-connection server daemon (185.156.73.233:20210).
Jan 16 23:56:57.887118 containerd[1469]: time="2026-01-16T23:56:57.887001915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-fe2a5b3650,Uid:c2cc036723e3d4b75bccc3328e52b6de,Namespace:kube-system,Attempt:0,}"
Jan 16 23:56:57.898668 containerd[1469]: time="2026-01-16T23:56:57.898557346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-fe2a5b3650,Uid:3e4c1a7372e4c44cc56827f901e126e5,Namespace:kube-system,Attempt:0,}"
Jan 16 23:56:57.906740 containerd[1469]: time="2026-01-16T23:56:57.906179856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-fe2a5b3650,Uid:52d565efd703938a784538cf58aea068,Namespace:kube-system,Attempt:0,}"
Jan 16 23:56:58.040190 kubelet[2208]: E0116 23:56:58.040110 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fe2a5b3650?timeout=10s\": dial tcp 46.224.42.239:6443: connect: connection refused" interval="800ms"
Jan 16 23:56:58.231130 kubelet[2208]: I0116 23:56:58.230591 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:58.231130 kubelet[2208]: E0116 23:56:58.231021 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.42.239:6443/api/v1/nodes\": dial tcp 46.224.42.239:6443: connect: connection refused" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:58.345800 kubelet[2208]: W0116 23:56:58.345723 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.224.42.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fe2a5b3650&limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:58.345800 kubelet[2208]: E0116 23:56:58.345800 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.224.42.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fe2a5b3650&limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:58.429522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446967800.mount: Deactivated successfully.
Jan 16 23:56:58.436983 containerd[1469]: time="2026-01-16T23:56:58.436827279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:56:58.442261 containerd[1469]: time="2026-01-16T23:56:58.441861356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 16 23:56:58.443714 containerd[1469]: time="2026-01-16T23:56:58.443181876Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:56:58.445174 containerd[1469]: time="2026-01-16T23:56:58.445116999Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:56:58.451414 containerd[1469]: time="2026-01-16T23:56:58.451250410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 16 23:56:58.451708 containerd[1469]: time="2026-01-16T23:56:58.451674439Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:56:58.454983 containerd[1469]: time="2026-01-16T23:56:58.454162946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 16 23:56:58.454983 containerd[1469]: time="2026-01-16T23:56:58.454932080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 16 23:56:58.456723 containerd[1469]: time="2026-01-16T23:56:58.456682742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.589846ms"
Jan 16 23:56:58.463764 containerd[1469]: time="2026-01-16T23:56:58.463636638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.962979ms"
Jan 16 23:56:58.465292 containerd[1469]: time="2026-01-16T23:56:58.464514706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.152131ms"
Jan 16 23:56:58.507297 kubelet[2208]: E0116 23:56:58.507048 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.42.239:6443/api/v1/namespaces/default/events\": dial tcp 46.224.42.239:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-fe2a5b3650.188b5b70807b572f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-fe2a5b3650,UID:ci-4081-3-6-n-fe2a5b3650,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-fe2a5b3650,},FirstTimestamp:2026-01-16 23:56:57.423083311 +0000 UTC m=+0.482887625,LastTimestamp:2026-01-16 23:56:57.423083311 +0000 UTC m=+0.482887625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-fe2a5b3650,}"
Jan 16 23:56:58.593358 kubelet[2208]: W0116 23:56:58.593286 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.224.42.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:58.593358 kubelet[2208]: E0116 23:56:58.593352 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.224.42.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:58.599074 containerd[1469]: time="2026-01-16T23:56:58.598771738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:56:58.599588 containerd[1469]: time="2026-01-16T23:56:58.598983732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:56:58.599588 containerd[1469]: time="2026-01-16T23:56:58.599027170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:56:58.599793 containerd[1469]: time="2026-01-16T23:56:58.599739279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:56:58.602523 containerd[1469]: time="2026-01-16T23:56:58.602211162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:56:58.602770 containerd[1469]: time="2026-01-16T23:56:58.602732936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:56:58.602818 containerd[1469]: time="2026-01-16T23:56:58.602772857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:56:58.605027 containerd[1469]: time="2026-01-16T23:56:58.604702346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:56:58.607347 containerd[1469]: time="2026-01-16T23:56:58.607102418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:56:58.607347 containerd[1469]: time="2026-01-16T23:56:58.607165997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:56:58.607347 containerd[1469]: time="2026-01-16T23:56:58.607177026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:56:58.607347 containerd[1469]: time="2026-01-16T23:56:58.607272373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:56:58.632163 systemd[1]: Started cri-containerd-7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c.scope - libcontainer container 7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c.
Jan 16 23:56:58.651141 systemd[1]: Started cri-containerd-71db887d63d25f8aeeabac328e539ccddd21c8b1447df2c3519e4ccf6c9aff5f.scope - libcontainer container 71db887d63d25f8aeeabac328e539ccddd21c8b1447df2c3519e4ccf6c9aff5f.
Jan 16 23:56:58.653535 systemd[1]: Started cri-containerd-d6d2ea255279972550f109cf520a4a8af840e82e22adad406c55c9afb40de60d.scope - libcontainer container d6d2ea255279972550f109cf520a4a8af840e82e22adad406c55c9afb40de60d.
Jan 16 23:56:58.691799 kubelet[2208]: W0116 23:56:58.691592 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.224.42.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:58.691799 kubelet[2208]: E0116 23:56:58.691741 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.224.42.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:58.726070 containerd[1469]: time="2026-01-16T23:56:58.726001425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-fe2a5b3650,Uid:52d565efd703938a784538cf58aea068,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d2ea255279972550f109cf520a4a8af840e82e22adad406c55c9afb40de60d\""
Jan 16 23:56:58.732740 containerd[1469]: time="2026-01-16T23:56:58.732148982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-fe2a5b3650,Uid:3e4c1a7372e4c44cc56827f901e126e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c\""
Jan 16 23:56:58.736885 containerd[1469]: time="2026-01-16T23:56:58.736822610Z" level=info msg="CreateContainer within sandbox \"d6d2ea255279972550f109cf520a4a8af840e82e22adad406c55c9afb40de60d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 16 23:56:58.738419 containerd[1469]: time="2026-01-16T23:56:58.738378741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-fe2a5b3650,Uid:c2cc036723e3d4b75bccc3328e52b6de,Namespace:kube-system,Attempt:0,} returns sandbox id \"71db887d63d25f8aeeabac328e539ccddd21c8b1447df2c3519e4ccf6c9aff5f\""
Jan 16 23:56:58.739192 containerd[1469]: time="2026-01-16T23:56:58.738903911Z" level=info msg="CreateContainer within sandbox \"7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 16 23:56:58.745324 containerd[1469]: time="2026-01-16T23:56:58.745162801Z" level=info msg="CreateContainer within sandbox \"71db887d63d25f8aeeabac328e539ccddd21c8b1447df2c3519e4ccf6c9aff5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 16 23:56:58.769316 containerd[1469]: time="2026-01-16T23:56:58.769070055Z" level=info msg="CreateContainer within sandbox \"d6d2ea255279972550f109cf520a4a8af840e82e22adad406c55c9afb40de60d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b051169f2f8275f46d737fbecc5e375306f19a586a4fc98bbcf41b49ffc744a1\""
Jan 16 23:56:58.770132 containerd[1469]: time="2026-01-16T23:56:58.770094461Z" level=info msg="StartContainer for \"b051169f2f8275f46d737fbecc5e375306f19a586a4fc98bbcf41b49ffc744a1\""
Jan 16 23:56:58.773562 containerd[1469]: time="2026-01-16T23:56:58.773294758Z" level=info msg="CreateContainer within sandbox \"7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394\""
Jan 16 23:56:58.774987 containerd[1469]: time="2026-01-16T23:56:58.774836302Z" level=info msg="CreateContainer within sandbox \"71db887d63d25f8aeeabac328e539ccddd21c8b1447df2c3519e4ccf6c9aff5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a503e827b0434c723811d423ecd1dfb94e9f18291556bb0be7c36a60b62f0179\""
Jan 16 23:56:58.775911 containerd[1469]: time="2026-01-16T23:56:58.775854035Z" level=info msg="StartContainer for \"b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394\""
Jan 16 23:56:58.781555 containerd[1469]: time="2026-01-16T23:56:58.781509950Z" level=info msg="StartContainer for \"a503e827b0434c723811d423ecd1dfb94e9f18291556bb0be7c36a60b62f0179\""
Jan 16 23:56:58.808143 systemd[1]: Started cri-containerd-b051169f2f8275f46d737fbecc5e375306f19a586a4fc98bbcf41b49ffc744a1.scope - libcontainer container b051169f2f8275f46d737fbecc5e375306f19a586a4fc98bbcf41b49ffc744a1.
Jan 16 23:56:58.816446 systemd[1]: Started cri-containerd-b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394.scope - libcontainer container b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394.
Jan 16 23:56:58.835198 systemd[1]: Started cri-containerd-a503e827b0434c723811d423ecd1dfb94e9f18291556bb0be7c36a60b62f0179.scope - libcontainer container a503e827b0434c723811d423ecd1dfb94e9f18291556bb0be7c36a60b62f0179.
Jan 16 23:56:58.842427 kubelet[2208]: E0116 23:56:58.841867 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fe2a5b3650?timeout=10s\": dial tcp 46.224.42.239:6443: connect: connection refused" interval="1.6s"
Jan 16 23:56:58.889486 containerd[1469]: time="2026-01-16T23:56:58.889260848Z" level=info msg="StartContainer for \"b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394\" returns successfully"
Jan 16 23:56:58.896637 containerd[1469]: time="2026-01-16T23:56:58.896255744Z" level=info msg="StartContainer for \"a503e827b0434c723811d423ecd1dfb94e9f18291556bb0be7c36a60b62f0179\" returns successfully"
Jan 16 23:56:58.896868 kubelet[2208]: W0116 23:56:58.896817 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.224.42.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.224.42.239:6443: connect: connection refused
Jan 16 23:56:58.896980 kubelet[2208]: E0116 23:56:58.896879 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.224.42.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.42.239:6443: connect: connection refused" logger="UnhandledError"
Jan 16 23:56:58.903919 containerd[1469]: time="2026-01-16T23:56:58.903721624Z" level=info msg="StartContainer for \"b051169f2f8275f46d737fbecc5e375306f19a586a4fc98bbcf41b49ffc744a1\" returns successfully"
Jan 16 23:56:59.033235 kubelet[2208]: I0116 23:56:59.033061 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:59.476410 kubelet[2208]: E0116 23:56:59.476125 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:59.481971 kubelet[2208]: E0116 23:56:59.480999 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:59.485484 kubelet[2208]: E0116 23:56:59.485212 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:56:59.956768 sshd[2240]: Invalid user admin from 185.156.73.233 port 20210
Jan 16 23:57:00.017531 sshd[2240]: Connection closed by invalid user admin 185.156.73.233 port 20210 [preauth]
Jan 16 23:57:00.018297 systemd[1]: sshd@7-46.224.42.239:22-185.156.73.233:20210.service: Deactivated successfully.
Jan 16 23:57:00.486370 kubelet[2208]: E0116 23:57:00.486013 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:00.488323 kubelet[2208]: E0116 23:57:00.488105 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.428646 kubelet[2208]: I0116 23:57:01.428348 2208 apiserver.go:52] "Watching apiserver"
Jan 16 23:57:01.488310 kubelet[2208]: E0116 23:57:01.488277 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.528200 kubelet[2208]: E0116 23:57:01.528138 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-fe2a5b3650\" not found" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.536614 kubelet[2208]: I0116 23:57:01.536567 2208 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 16 23:57:01.553291 kubelet[2208]: I0116 23:57:01.553237 2208 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.553291 kubelet[2208]: E0116 23:57:01.553290 2208 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-fe2a5b3650\": node \"ci-4081-3-6-n-fe2a5b3650\" not found"
Jan 16 23:57:01.638148 kubelet[2208]: I0116 23:57:01.637809 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.692229 kubelet[2208]: E0116 23:57:01.691791 2208 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.692229 kubelet[2208]: I0116 23:57:01.691829 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.701498 kubelet[2208]: E0116 23:57:01.701186 2208 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.701498 kubelet[2208]: I0116 23:57:01.701221 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:01.703934 kubelet[2208]: E0116 23:57:01.703895 2208 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-fe2a5b3650\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:03.200283 kubelet[2208]: I0116 23:57:03.200237 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650"
Jan 16 23:57:03.884841 systemd[1]: Reloading requested from client PID 2481 ('systemctl') (unit session-7.scope)...
Jan 16 23:57:03.884858 systemd[1]: Reloading...
Jan 16 23:57:04.003995 zram_generator::config[2521]: No configuration found.
Jan 16 23:57:04.108045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:57:04.192478 systemd[1]: Reloading finished in 307 ms. Jan 16 23:57:04.245225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:57:04.259838 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 23:57:04.260196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:57:04.266720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:57:04.404314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:57:04.416395 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:57:04.483314 kubelet[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:57:04.483314 kubelet[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:57:04.483314 kubelet[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:57:04.483314 kubelet[2566]: I0116 23:57:04.482358 2566 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:57:04.497461 kubelet[2566]: I0116 23:57:04.497354 2566 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 23:57:04.497461 kubelet[2566]: I0116 23:57:04.497399 2566 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:57:04.497806 kubelet[2566]: I0116 23:57:04.497688 2566 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 23:57:04.499171 kubelet[2566]: I0116 23:57:04.499121 2566 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 23:57:04.503184 kubelet[2566]: I0116 23:57:04.502758 2566 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:57:04.508927 kubelet[2566]: E0116 23:57:04.508826 2566 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:57:04.509254 kubelet[2566]: I0116 23:57:04.509226 2566 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:57:04.513156 kubelet[2566]: I0116 23:57:04.513102 2566 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
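systemd rewrites the legacy /var/run/docker.sock path at load time and asks for the unit file to be updated. A minimal drop-in sketch that makes the fix permanent; the drop-in directory and file name are illustrative, not taken from this host:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runtime-dir.conf
    [Socket]
    # An empty assignment clears the inherited socket list before
    # re-adding the non-legacy path.
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload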
defaulting to /" Jan 16 23:57:04.513513 kubelet[2566]: I0116 23:57:04.513472 2566 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:57:04.513908 kubelet[2566]: I0116 23:57:04.513517 2566 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-fe2a5b3650","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 23:57:04.514108 kubelet[2566]: I0116 23:57:04.513912 2566 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:57:04.514108 kubelet[2566]: I0116 23:57:04.513932 2566 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 23:57:04.514108 kubelet[2566]: I0116 23:57:04.514028 2566 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:57:04.514289 kubelet[2566]: I0116 23:57:04.514231 2566 kubelet.go:446] "Attempting to sync node with API server" Jan 16 23:57:04.514289 kubelet[2566]: I0116 23:57:04.514252 2566 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:57:04.514289 kubelet[2566]: I0116 23:57:04.514278 2566 kubelet.go:352] "Adding apiserver pod source" Jan 16 23:57:04.515428 kubelet[2566]: I0116 23:57:04.514292 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:57:04.527962 kubelet[2566]: I0116 23:57:04.526309 2566 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:57:04.527962 kubelet[2566]: I0116 23:57:04.527446 2566 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 23:57:04.528242 kubelet[2566]: I0116 23:57:04.528216 2566 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:57:04.528280 kubelet[2566]: I0116 23:57:04.528269 2566 server.go:1287] "Started kubelet" Jan 16 23:57:04.532359 kubelet[2566]: I0116 23:57:04.531793 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:57:04.538320 kubelet[2566]: I0116 23:57:04.538264 2566 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:57:04.542964 kubelet[2566]: I0116 23:57:04.539329 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:57:04.542964 kubelet[2566]: I0116 23:57:04.539714 2566 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:57:04.542964 kubelet[2566]: I0116 23:57:04.540066 2566 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:57:04.545958 kubelet[2566]: I0116 23:57:04.544273 2566 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:57:04.546430 kubelet[2566]: E0116 23:57:04.546406 2566 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-fe2a5b3650\" not found" Jan 16 23:57:04.548105 kubelet[2566]: I0116 23:57:04.548082 2566 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:57:04.548472 kubelet[2566]: I0116 23:57:04.548459 2566 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:57:04.568102 kubelet[2566]: I0116 23:57:04.568071 2566 server.go:479] "Adding debug handlers to kubelet server" Jan 16 23:57:04.569104 kubelet[2566]: I0116 23:57:04.568767 2566 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:57:04.577578 kubelet[2566]: I0116 23:57:04.577530 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 23:57:04.580297 kubelet[2566]: I0116 23:57:04.580264 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 23:57:04.580520 kubelet[2566]: I0116 23:57:04.580507 2566 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 23:57:04.580616 kubelet[2566]: I0116 23:57:04.580605 2566 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 23:57:04.580682 kubelet[2566]: I0116 23:57:04.580673 2566 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 23:57:04.580794 kubelet[2566]: E0116 23:57:04.580775 2566 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:57:04.581450 kubelet[2566]: I0116 23:57:04.581418 2566 factory.go:221] Registration of the containerd container factory successfully Jan 16 23:57:04.581450 kubelet[2566]: I0116 23:57:04.581440 2566 factory.go:221] Registration of the systemd container factory successfully Jan 16 23:57:04.593537 kubelet[2566]: E0116 23:57:04.593505 2566 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:57:04.649001 kubelet[2566]: I0116 23:57:04.648974 2566 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:57:04.649191 kubelet[2566]: I0116 23:57:04.649177 2566 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:57:04.649282 kubelet[2566]: I0116 23:57:04.649272 2566 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:57:04.649572 kubelet[2566]: I0116 23:57:04.649554 2566 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 23:57:04.649662 kubelet[2566]: I0116 23:57:04.649638 2566 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 23:57:04.649717 kubelet[2566]: I0116 23:57:04.649710 2566 policy_none.go:49] "None policy: Start" Jan 16 23:57:04.649774 kubelet[2566]: I0116 23:57:04.649766 2566 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:57:04.649835 kubelet[2566]: I0116 23:57:04.649827 2566 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:57:04.650238 kubelet[2566]: I0116 23:57:04.650216 2566 state_mem.go:75] "Updated machine memory state" Jan 16 23:57:04.656412 kubelet[2566]: I0116 23:57:04.656373 2566 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 23:57:04.659133 kubelet[2566]: I0116 23:57:04.658891 2566 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:57:04.660413 kubelet[2566]: I0116 23:57:04.659388 2566 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:57:04.660413 kubelet[2566]: I0116 23:57:04.660005 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:57:04.665247 kubelet[2566]: E0116 23:57:04.662741 2566 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 23:57:04.681439 kubelet[2566]: I0116 23:57:04.681404 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.682022 kubelet[2566]: I0116 23:57:04.681404 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.682712 kubelet[2566]: I0116 23:57:04.682605 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.694743 kubelet[2566]: E0116 23:57:04.694697 2566 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.774987 kubelet[2566]: I0116 23:57:04.774900 2566 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.789036 kubelet[2566]: I0116 23:57:04.788726 2566 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.789036 kubelet[2566]: I0116 23:57:04.788846 2566 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.850570 kubelet[2566]: I0116 23:57:04.850367 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52d565efd703938a784538cf58aea068-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-fe2a5b3650\" (UID: \"52d565efd703938a784538cf58aea068\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.850570 kubelet[2566]: I0116 23:57:04.850438 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2cc036723e3d4b75bccc3328e52b6de-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" (UID: \"c2cc036723e3d4b75bccc3328e52b6de\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.850570 kubelet[2566]: I0116 23:57:04.850476 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.850570 kubelet[2566]: I0116 23:57:04.850532 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.851492 kubelet[2566]: I0116 23:57:04.851066 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.851492 kubelet[2566]: I0116 23:57:04.851165 2566 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2cc036723e3d4b75bccc3328e52b6de-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" (UID: \"c2cc036723e3d4b75bccc3328e52b6de\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.851492 kubelet[2566]: I0116 23:57:04.851228 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2cc036723e3d4b75bccc3328e52b6de-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" (UID: \"c2cc036723e3d4b75bccc3328e52b6de\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.851492 kubelet[2566]: I0116 23:57:04.851424 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:04.851844 kubelet[2566]: I0116 23:57:04.851547 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e4c1a7372e4c44cc56827f901e126e5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-fe2a5b3650\" (UID: \"3e4c1a7372e4c44cc56827f901e126e5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:05.515274 kubelet[2566]: I0116 23:57:05.515228 2566 apiserver.go:52] "Watching apiserver" Jan 16 23:57:05.549631 kubelet[2566]: I0116 23:57:05.549526 2566 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:57:05.623767 kubelet[2566]: I0116 23:57:05.623564 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:05.624155 kubelet[2566]: I0116 23:57:05.624123 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:05.643984 kubelet[2566]: E0116 23:57:05.641295 2566 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-fe2a5b3650\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:05.643984 kubelet[2566]: E0116 23:57:05.641596 2566 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-fe2a5b3650\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:05.658347 kubelet[2566]: I0116 23:57:05.658245 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fe2a5b3650" podStartSLOduration=1.6581820280000001 podStartE2EDuration="1.658182028s" podCreationTimestamp="2026-01-16 23:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:05.65708873 +0000 UTC m=+1.235222433" watchObservedRunningTime="2026-01-16 23:57:05.658182028 +0000 UTC m=+1.236315651" Jan 16 23:57:05.679132 kubelet[2566]: I0116 23:57:05.678998 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fe2a5b3650" 
podStartSLOduration=1.678980212 podStartE2EDuration="1.678980212s" podCreationTimestamp="2026-01-16 23:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:05.678640511 +0000 UTC m=+1.256774134" watchObservedRunningTime="2026-01-16 23:57:05.678980212 +0000 UTC m=+1.257113835" Jan 16 23:57:05.720658 kubelet[2566]: I0116 23:57:05.720163 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fe2a5b3650" podStartSLOduration=2.720118646 podStartE2EDuration="2.720118646s" podCreationTimestamp="2026-01-16 23:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:05.699347581 +0000 UTC m=+1.277481204" watchObservedRunningTime="2026-01-16 23:57:05.720118646 +0000 UTC m=+1.298252269" Jan 16 23:57:11.213575 kubelet[2566]: I0116 23:57:11.213499 2566 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 23:57:11.215346 containerd[1469]: time="2026-01-16T23:57:11.214440153Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 23:57:11.216675 kubelet[2566]: I0116 23:57:11.216551 2566 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 23:57:11.823898 systemd[1]: Created slice kubepods-besteffort-pod5a137092_ed6a_4675_8dbd_57c835616daf.slice - libcontainer container kubepods-besteffort-pod5a137092_ed6a_4675_8dbd_57c835616daf.slice. Jan 16 23:57:11.849476 kubelet[2566]: W0116 23:57:11.849192 2566 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-6-n-fe2a5b3650" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object Jan 16 23:57:11.849476 kubelet[2566]: E0116 23:57:11.849263 2566 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-6-n-fe2a5b3650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object" logger="UnhandledError" Jan 16 23:57:11.849476 kubelet[2566]: W0116 23:57:11.849321 2566 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-n-fe2a5b3650" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object Jan 16 23:57:11.849476 kubelet[2566]: E0116 23:57:11.849335 2566 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-n-fe2a5b3650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object" logger="UnhandledError" Jan 16 23:57:11.849476 kubelet[2566]: I0116 23:57:11.849434 2566 status_manager.go:890] "Failed to get status 
for pod" podUID="5a137092-ed6a-4675-8dbd-57c835616daf" pod="kube-system/kube-proxy-pbwhb" err="pods \"kube-proxy-pbwhb\" is forbidden: User \"system:node:ci-4081-3-6-n-fe2a5b3650\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object" Jan 16 23:57:11.898268 kubelet[2566]: I0116 23:57:11.898072 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx596\" (UniqueName: \"kubernetes.io/projected/5a137092-ed6a-4675-8dbd-57c835616daf-kube-api-access-lx596\") pod \"kube-proxy-pbwhb\" (UID: \"5a137092-ed6a-4675-8dbd-57c835616daf\") " pod="kube-system/kube-proxy-pbwhb" Jan 16 23:57:11.898268 kubelet[2566]: I0116 23:57:11.898132 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a137092-ed6a-4675-8dbd-57c835616daf-kube-proxy\") pod \"kube-proxy-pbwhb\" (UID: \"5a137092-ed6a-4675-8dbd-57c835616daf\") " pod="kube-system/kube-proxy-pbwhb" Jan 16 23:57:11.898268 kubelet[2566]: I0116 23:57:11.898153 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a137092-ed6a-4675-8dbd-57c835616daf-lib-modules\") pod \"kube-proxy-pbwhb\" (UID: \"5a137092-ed6a-4675-8dbd-57c835616daf\") " pod="kube-system/kube-proxy-pbwhb" Jan 16 23:57:11.898268 kubelet[2566]: I0116 23:57:11.898172 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a137092-ed6a-4675-8dbd-57c835616daf-xtables-lock\") pod \"kube-proxy-pbwhb\" (UID: \"5a137092-ed6a-4675-8dbd-57c835616daf\") " pod="kube-system/kube-proxy-pbwhb" Jan 16 23:57:12.247003 systemd[1]: Created slice kubepods-besteffort-poded6e7289_f33f_49c8_90a5_7f7da073b6f1.slice - libcontainer container kubepods-besteffort-poded6e7289_f33f_49c8_90a5_7f7da073b6f1.slice. 
Jan 16 23:57:12.250156 kubelet[2566]: W0116 23:57:12.248492 2566 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-6-n-fe2a5b3650" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object Jan 16 23:57:12.250156 kubelet[2566]: E0116 23:57:12.248562 2566 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081-3-6-n-fe2a5b3650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-6-n-fe2a5b3650' and this object" logger="UnhandledError" Jan 16 23:57:12.301722 kubelet[2566]: I0116 23:57:12.301627 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4hs\" (UniqueName: \"kubernetes.io/projected/ed6e7289-f33f-49c8-90a5-7f7da073b6f1-kube-api-access-nm4hs\") pod \"tigera-operator-7dcd859c48-slgsh\" (UID: \"ed6e7289-f33f-49c8-90a5-7f7da073b6f1\") " pod="tigera-operator/tigera-operator-7dcd859c48-slgsh" Jan 16 23:57:12.301722 kubelet[2566]: I0116 23:57:12.301707 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ed6e7289-f33f-49c8-90a5-7f7da073b6f1-var-lib-calico\") pod \"tigera-operator-7dcd859c48-slgsh\" (UID: \"ed6e7289-f33f-49c8-90a5-7f7da073b6f1\") " pod="tigera-operator/tigera-operator-7dcd859c48-slgsh" Jan 16 23:57:12.554734 containerd[1469]: time="2026-01-16T23:57:12.554222650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-slgsh,Uid:ed6e7289-f33f-49c8-90a5-7f7da073b6f1,Namespace:tigera-operator,Attempt:0,}" Jan 16 23:57:12.590124 containerd[1469]: time="2026-01-16T23:57:12.588880043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:12.590124 containerd[1469]: time="2026-01-16T23:57:12.588975400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:12.590124 containerd[1469]: time="2026-01-16T23:57:12.588989359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:12.590124 containerd[1469]: time="2026-01-16T23:57:12.589071116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:12.616284 systemd[1]: Started cri-containerd-9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055.scope - libcontainer container 9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055. 
Jan 16 23:57:12.663615 containerd[1469]: time="2026-01-16T23:57:12.663421831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-slgsh,Uid:ed6e7289-f33f-49c8-90a5-7f7da073b6f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055\"" Jan 16 23:57:12.666649 containerd[1469]: time="2026-01-16T23:57:12.666514913Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 23:57:13.000820 kubelet[2566]: E0116 23:57:13.000655 2566 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 16 23:57:13.000820 kubelet[2566]: E0116 23:57:13.000782 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a137092-ed6a-4675-8dbd-57c835616daf-kube-proxy podName:5a137092-ed6a-4675-8dbd-57c835616daf nodeName:}" failed. No retries permitted until 2026-01-16 23:57:13.500749245 +0000 UTC m=+9.078882868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5a137092-ed6a-4675-8dbd-57c835616daf-kube-proxy") pod "kube-proxy-pbwhb" (UID: "5a137092-ed6a-4675-8dbd-57c835616daf") : failed to sync configmap cache: timed out waiting for the condition Jan 16 23:57:13.638479 containerd[1469]: time="2026-01-16T23:57:13.638425959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbwhb,Uid:5a137092-ed6a-4675-8dbd-57c835616daf,Namespace:kube-system,Attempt:0,}" Jan 16 23:57:13.670113 containerd[1469]: time="2026-01-16T23:57:13.670000254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:13.670394 containerd[1469]: time="2026-01-16T23:57:13.670332642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:13.670760 containerd[1469]: time="2026-01-16T23:57:13.670556514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:13.670802 containerd[1469]: time="2026-01-16T23:57:13.670767266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:13.698248 systemd[1]: Started cri-containerd-603ae0bad69996beab941512df22c803c711fc4bc83c691b1f319cc52d82e42a.scope - libcontainer container 603ae0bad69996beab941512df22c803c711fc4bc83c691b1f319cc52d82e42a. 
Jan 16 23:57:13.731023 containerd[1469]: time="2026-01-16T23:57:13.730971603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbwhb,Uid:5a137092-ed6a-4675-8dbd-57c835616daf,Namespace:kube-system,Attempt:0,} returns sandbox id \"603ae0bad69996beab941512df22c803c711fc4bc83c691b1f319cc52d82e42a\"" Jan 16 23:57:13.736503 containerd[1469]: time="2026-01-16T23:57:13.736320409Z" level=info msg="CreateContainer within sandbox \"603ae0bad69996beab941512df22c803c711fc4bc83c691b1f319cc52d82e42a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 23:57:13.757576 containerd[1469]: time="2026-01-16T23:57:13.757190012Z" level=info msg="CreateContainer within sandbox \"603ae0bad69996beab941512df22c803c711fc4bc83c691b1f319cc52d82e42a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e2474f5e2851d1eb4654938fe9094ec9e75eeca839a038df945326d20f52400\"" Jan 16 23:57:13.761391 containerd[1469]: time="2026-01-16T23:57:13.758220095Z" level=info msg="StartContainer for \"8e2474f5e2851d1eb4654938fe9094ec9e75eeca839a038df945326d20f52400\"" Jan 16 23:57:13.759381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740759314.mount: Deactivated successfully. Jan 16 23:57:13.794237 systemd[1]: Started cri-containerd-8e2474f5e2851d1eb4654938fe9094ec9e75eeca839a038df945326d20f52400.scope - libcontainer container 8e2474f5e2851d1eb4654938fe9094ec9e75eeca839a038df945326d20f52400. Jan 16 23:57:13.836814 containerd[1469]: time="2026-01-16T23:57:13.836757047Z" level=info msg="StartContainer for \"8e2474f5e2851d1eb4654938fe9094ec9e75eeca839a038df945326d20f52400\" returns successfully" Jan 16 23:57:14.676620 kubelet[2566]: I0116 23:57:14.676361 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbwhb" podStartSLOduration=3.676246748 podStartE2EDuration="3.676246748s" podCreationTimestamp="2026-01-16 23:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:14.674611204 +0000 UTC m=+10.252744827" watchObservedRunningTime="2026-01-16 23:57:14.676246748 +0000 UTC m=+10.254380371" Jan 16 23:57:14.761001 containerd[1469]: time="2026-01-16T23:57:14.760802040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:14.762990 containerd[1469]: time="2026-01-16T23:57:14.762819531Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 16 23:57:14.768334 containerd[1469]: time="2026-01-16T23:57:14.767307696Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:14.773111 containerd[1469]: time="2026-01-16T23:57:14.773046379Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:14.774065 containerd[1469]: time="2026-01-16T23:57:14.774026905Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 
2.107452714s" Jan 16 23:57:14.774196 containerd[1469]: time="2026-01-16T23:57:14.774179500Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 16 23:57:14.778727 containerd[1469]: time="2026-01-16T23:57:14.778663666Z" level=info msg="CreateContainer within sandbox \"9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 23:57:14.803792 containerd[1469]: time="2026-01-16T23:57:14.803742243Z" level=info msg="CreateContainer within sandbox \"9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f\"" Jan 16 23:57:14.808054 containerd[1469]: time="2026-01-16T23:57:14.807981937Z" level=info msg="StartContainer for \"0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f\"" Jan 16 23:57:14.853327 systemd[1]: Started cri-containerd-0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f.scope - libcontainer container 0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f. Jan 16 23:57:14.884985 containerd[1469]: time="2026-01-16T23:57:14.884739778Z" level=info msg="StartContainer for \"0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f\" returns successfully" Jan 16 23:57:21.227599 sudo[1705]: pam_unix(sudo:session): session closed for user root Jan 16 23:57:21.323473 sshd[1702]: pam_unix(sshd:session): session closed for user core Jan 16 23:57:21.331587 systemd[1]: sshd@6-46.224.42.239:22-4.153.228.146:60412.service: Deactivated successfully. Jan 16 23:57:21.335898 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 23:57:21.337205 systemd[1]: session-7.scope: Consumed 8.897s CPU time, 152.1M memory peak, 0B memory swap peak. Jan 16 23:57:21.340955 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 16 23:57:21.344519 systemd-logind[1448]: Removed session 7. Jan 16 23:57:34.110731 kubelet[2566]: I0116 23:57:34.109211 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-slgsh" podStartSLOduration=19.999068203 podStartE2EDuration="22.109189623s" podCreationTimestamp="2026-01-16 23:57:12 +0000 UTC" firstStartedPulling="2026-01-16 23:57:12.66579966 +0000 UTC m=+8.243933283" lastFinishedPulling="2026-01-16 23:57:14.77592104 +0000 UTC m=+10.354054703" observedRunningTime="2026-01-16 23:57:15.668533276 +0000 UTC m=+11.246666899" watchObservedRunningTime="2026-01-16 23:57:34.109189623 +0000 UTC m=+29.687323206" Jan 16 23:57:34.123305 systemd[1]: Created slice kubepods-besteffort-pod8f7d6773_acfe_4b45_8555_86c68d6cf24a.slice - libcontainer container kubepods-besteffort-pod8f7d6773_acfe_4b45_8555_86c68d6cf24a.slice. 
Jan 16 23:57:34.142528 kubelet[2566]: I0116 23:57:34.142485 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc6rh\" (UniqueName: \"kubernetes.io/projected/8f7d6773-acfe-4b45-8555-86c68d6cf24a-kube-api-access-fc6rh\") pod \"calico-typha-66dd57c865-p2tm8\" (UID: \"8f7d6773-acfe-4b45-8555-86c68d6cf24a\") " pod="calico-system/calico-typha-66dd57c865-p2tm8" Jan 16 23:57:34.142775 kubelet[2566]: I0116 23:57:34.142762 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f7d6773-acfe-4b45-8555-86c68d6cf24a-tigera-ca-bundle\") pod \"calico-typha-66dd57c865-p2tm8\" (UID: \"8f7d6773-acfe-4b45-8555-86c68d6cf24a\") " pod="calico-system/calico-typha-66dd57c865-p2tm8" Jan 16 23:57:34.142914 kubelet[2566]: I0116 23:57:34.142867 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8f7d6773-acfe-4b45-8555-86c68d6cf24a-typha-certs\") pod \"calico-typha-66dd57c865-p2tm8\" (UID: \"8f7d6773-acfe-4b45-8555-86c68d6cf24a\") " pod="calico-system/calico-typha-66dd57c865-p2tm8" Jan 16 23:57:34.308225 systemd[1]: Created slice kubepods-besteffort-pod571700f0_8042_498f_b62b_412eb3c52dce.slice - libcontainer container kubepods-besteffort-pod571700f0_8042_498f_b62b_412eb3c52dce.slice. Jan 16 23:57:34.345115 kubelet[2566]: I0116 23:57:34.344823 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-var-lib-calico\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.345115 kubelet[2566]: I0116 23:57:34.344902 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-cni-net-dir\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.345115 kubelet[2566]: I0116 23:57:34.344941 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-cni-bin-dir\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.345115 kubelet[2566]: I0116 23:57:34.344996 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-policysync\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.345115 kubelet[2566]: I0116 23:57:34.345022 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-lib-modules\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346058 kubelet[2566]: I0116 23:57:34.345041 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-flexvol-driver-host\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346058 kubelet[2566]: I0116 23:57:34.345058 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/571700f0-8042-498f-b62b-412eb3c52dce-tigera-ca-bundle\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346058 kubelet[2566]: I0116 23:57:34.345120 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-xtables-lock\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346058 kubelet[2566]: I0116 23:57:34.345301 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-cni-log-dir\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346058 kubelet[2566]: I0116 23:57:34.345377 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/571700f0-8042-498f-b62b-412eb3c52dce-node-certs\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346906 kubelet[2566]: I0116 23:57:34.345449 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpfnd\" (UniqueName: \"kubernetes.io/projected/571700f0-8042-498f-b62b-412eb3c52dce-kube-api-access-xpfnd\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.346906 kubelet[2566]: I0116 23:57:34.345651 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/571700f0-8042-498f-b62b-412eb3c52dce-var-run-calico\") pod \"calico-node-z2nlx\" (UID: \"571700f0-8042-498f-b62b-412eb3c52dce\") " pod="calico-system/calico-node-z2nlx" Jan 16 23:57:34.430602 containerd[1469]: time="2026-01-16T23:57:34.430427841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66dd57c865-p2tm8,Uid:8f7d6773-acfe-4b45-8555-86c68d6cf24a,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:34.462050 kubelet[2566]: E0116 23:57:34.458187 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.462050 kubelet[2566]: W0116 23:57:34.458213 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.462050 kubelet[2566]: E0116 23:57:34.458316 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:34.462860 kubelet[2566]: E0116 23:57:34.462759 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.462860 kubelet[2566]: W0116 23:57:34.462784 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.462860 kubelet[2566]: E0116 23:57:34.462812 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.485543 containerd[1469]: time="2026-01-16T23:57:34.485129115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:34.485543 containerd[1469]: time="2026-01-16T23:57:34.485192674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:34.485543 containerd[1469]: time="2026-01-16T23:57:34.485208194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:34.485543 containerd[1469]: time="2026-01-16T23:57:34.485291632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:34.513564 kubelet[2566]: E0116 23:57:34.513537 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.513808 kubelet[2566]: W0116 23:57:34.513754 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.514093 kubelet[2566]: E0116 23:57:34.513968 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.546620 systemd[1]: Started cri-containerd-ee595c685be042d2f4c1224168dca56571346a46dafb631f7e66d50a74a9f29a.scope - libcontainer container ee595c685be042d2f4c1224168dca56571346a46dafb631f7e66d50a74a9f29a. 
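The recurring FlexVolume triplets below and above (unmarshal failure, "executable file not found", plugin skipped) are the kubelet re-probing Calico's FlexVolume directory before calico-node's flexvol-driver init container has installed the uds binary; the calico-node pod spec above mounts exactly that host path as flexvol-driver-host, so the noise stops once the pod is running. A hedged way to watch the directory get populated:

    # Path copied verbatim from the driver-call errors; the 'uds' binary
    # appears here once calico-node's flexvol-driver init container has run.
    ls -l /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/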
Jan 16 23:57:34.553204 kubelet[2566]: E0116 23:57:34.552222 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:57:34.618092 containerd[1469]: time="2026-01-16T23:57:34.617968293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z2nlx,Uid:571700f0-8042-498f-b62b-412eb3c52dce,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:34.638528 kubelet[2566]: E0116 23:57:34.638187 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.638528 kubelet[2566]: W0116 23:57:34.638237 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.638528 kubelet[2566]: E0116 23:57:34.638262 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.639350 kubelet[2566]: E0116 23:57:34.639317 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.639350 kubelet[2566]: W0116 23:57:34.639343 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.639636 kubelet[2566]: E0116 23:57:34.639365 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.640108 kubelet[2566]: E0116 23:57:34.640014 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.640108 kubelet[2566]: W0116 23:57:34.640030 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.640108 kubelet[2566]: E0116 23:57:34.640043 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.640448 kubelet[2566]: E0116 23:57:34.640205 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.640448 kubelet[2566]: W0116 23:57:34.640214 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.640448 kubelet[2566]: E0116 23:57:34.640223 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:34.640448 kubelet[2566]: E0116 23:57:34.640379 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.640448 kubelet[2566]: W0116 23:57:34.640451 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.641259 kubelet[2566]: E0116 23:57:34.640461 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.641259 kubelet[2566]: E0116 23:57:34.641036 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.641259 kubelet[2566]: W0116 23:57:34.641051 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.641259 kubelet[2566]: E0116 23:57:34.641064 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.641259 kubelet[2566]: E0116 23:57:34.641241 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.641259 kubelet[2566]: W0116 23:57:34.641250 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.641259 kubelet[2566]: E0116 23:57:34.641258 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.643084 kubelet[2566]: E0116 23:57:34.641398 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.643084 kubelet[2566]: W0116 23:57:34.641408 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.643084 kubelet[2566]: E0116 23:57:34.641416 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:34.643084 kubelet[2566]: E0116 23:57:34.642606 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:34.643084 kubelet[2566]: W0116 23:57:34.642659 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:34.643084 kubelet[2566]: E0116 23:57:34.642716 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 16 23:57:34.643084 kubelet[2566]: E0116 23:57:34.643043 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 23:57:34.643084 kubelet[2566]: W0116 23:57:34.643058 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 23:57:34.643305 kubelet[2566]: E0116 23:57:34.643097 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the driver-call.go:262 / driver-call.go:149 / plugins.go:695 triplet above recurs near-verbatim through Jan 16 23:57:34.823; repeats omitted, the distinct entries interleaved with them are kept below]
Jan 16 23:57:34.649283 kubelet[2566]: I0116 23:57:34.648707 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54dj2\" (UniqueName: \"kubernetes.io/projected/f9b64606-aa04-4801-bf16-55e0f797524c-kube-api-access-54dj2\") pod \"csi-node-driver-j4ltk\" (UID: \"f9b64606-aa04-4801-bf16-55e0f797524c\") " pod="calico-system/csi-node-driver-j4ltk"
Jan 16 23:57:34.650102 kubelet[2566]: I0116 23:57:34.649816 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64606-aa04-4801-bf16-55e0f797524c-registration-dir\") pod \"csi-node-driver-j4ltk\" (UID: \"f9b64606-aa04-4801-bf16-55e0f797524c\") " pod="calico-system/csi-node-driver-j4ltk"
Jan 16 23:57:34.654226 kubelet[2566]: I0116 23:57:34.654196 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64606-aa04-4801-bf16-55e0f797524c-kubelet-dir\") pod \"csi-node-driver-j4ltk\" (UID: \"f9b64606-aa04-4801-bf16-55e0f797524c\") " pod="calico-system/csi-node-driver-j4ltk"
Jan 16 23:57:34.656050 kubelet[2566]: I0116 23:57:34.654788 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f9b64606-aa04-4801-bf16-55e0f797524c-varrun\") pod \"csi-node-driver-j4ltk\" (UID: \"f9b64606-aa04-4801-bf16-55e0f797524c\") " pod="calico-system/csi-node-driver-j4ltk"
Jan 16 23:57:34.658533 kubelet[2566]: I0116 23:57:34.658618 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64606-aa04-4801-bf16-55e0f797524c-socket-dir\") pod \"csi-node-driver-j4ltk\" (UID: \"f9b64606-aa04-4801-bf16-55e0f797524c\") " pod="calico-system/csi-node-driver-j4ltk"
Jan 16 23:57:34.682749 containerd[1469]: time="2026-01-16T23:57:34.682550508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:57:34.682749 containerd[1469]: time="2026-01-16T23:57:34.682621027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:57:34.682749 containerd[1469]: time="2026-01-16T23:57:34.682633947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:34.684166 containerd[1469]: time="2026-01-16T23:57:34.682725786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:57:34.714663 systemd[1]: Started cri-containerd-552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03.scope - libcontainer container 552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03.
Jan 16 23:57:34.758765 containerd[1469]: time="2026-01-16T23:57:34.758608483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66dd57c865-p2tm8,Uid:8f7d6773-acfe-4b45-8555-86c68d6cf24a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee595c685be042d2f4c1224168dca56571346a46dafb631f7e66d50a74a9f29a\""
Jan 16 23:57:34.765454 containerd[1469]: time="2026-01-16T23:57:34.764408601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 16 23:57:34.816082 containerd[1469]: time="2026-01-16T23:57:34.815153010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z2nlx,Uid:571700f0-8042-498f-b62b-412eb3c52dce,Namespace:calico-system,Attempt:0,} returns sandbox id \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\""
Jan 16 23:57:36.212302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562108372.mount: Deactivated successfully.
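The probe failures collapsed above all have one mechanical cause: kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshals the binary's stdout as JSON. The binary is not installed yet, so the exec fails, stdout is the empty string, and Go's encoding/json returns exactly the error string in the log. A minimal standalone Go sketch (not kubelet code) that reproduces it:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Stand-in for the missing driver's stdout: empty, matching the
        // log's `output: ""`.
        var status map[string]interface{}
        err := json.Unmarshal([]byte(""), &status)
        fmt.Println(err) // prints: unexpected end of JSON input
    }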
Jan 16 23:57:36.582580 kubelet[2566]: E0116 23:57:36.581894 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:57:36.730836 containerd[1469]: time="2026-01-16T23:57:36.729415391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:36.730836 containerd[1469]: time="2026-01-16T23:57:36.730448218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 16 23:57:36.733309 containerd[1469]: time="2026-01-16T23:57:36.733252461Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:36.737186 containerd[1469]: time="2026-01-16T23:57:36.737139450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:36.737875 containerd[1469]: time="2026-01-16T23:57:36.737834561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.973356201s"
Jan 16 23:57:36.737875 containerd[1469]: time="2026-01-16T23:57:36.737878441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 16 23:57:36.740397 containerd[1469]: time="2026-01-16T23:57:36.740067812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 16 23:57:36.762821 containerd[1469]: time="2026-01-16T23:57:36.762724516Z" level=info msg="CreateContainer within sandbox \"ee595c685be042d2f4c1224168dca56571346a46dafb631f7e66d50a74a9f29a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 16 23:57:36.781530 containerd[1469]: time="2026-01-16T23:57:36.781461911Z" level=info msg="CreateContainer within sandbox \"ee595c685be042d2f4c1224168dca56571346a46dafb631f7e66d50a74a9f29a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f4932fdf4a72047381cac7b735dd69af459d33721d160ad5c5fa9a5835d67fff\""
Jan 16 23:57:36.782665 containerd[1469]: time="2026-01-16T23:57:36.782484897Z" level=info msg="StartContainer for \"f4932fdf4a72047381cac7b735dd69af459d33721d160ad5c5fa9a5835d67fff\""
Jan 16 23:57:36.815581 systemd[1]: Started cri-containerd-f4932fdf4a72047381cac7b735dd69af459d33721d160ad5c5fa9a5835d67fff.scope - libcontainer container f4932fdf4a72047381cac7b735dd69af459d33721d160ad5c5fa9a5835d67fff.
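The pull duration containerd reports can be checked against the log's own timestamps: PullImage for calico/typha was logged at 2026-01-16T23:57:34.764408601Z and the Pulled message at 2026-01-16T23:57:36.737834561Z, within a fraction of a millisecond of the stated "1.973356201s" (the completion line is stamped just after the measured interval ends). A small Go check of that arithmetic, using the two timestamps verbatim:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both timestamps are copied from the containerd entries above.
        start, err := time.Parse(time.RFC3339Nano, "2026-01-16T23:57:34.764408601Z")
        if err != nil {
            panic(err)
        }
        done, err := time.Parse(time.RFC3339Nano, "2026-01-16T23:57:36.737834561Z")
        if err != nil {
            panic(err)
        }
        // Prints 1.97342596s, a few tens of microseconds above the
        // reported 1.973356201s pull duration.
        fmt.Println(done.Sub(start))
    }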
Jan 16 23:57:36.871481 containerd[1469]: time="2026-01-16T23:57:36.870481547Z" level=info msg="StartContainer for \"f4932fdf4a72047381cac7b735dd69af459d33721d160ad5c5fa9a5835d67fff\" returns successfully"
Jan 16 23:57:37.772708 kubelet[2566]: E0116 23:57:37.772648 2566 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 23:57:37.772708 kubelet[2566]: W0116 23:57:37.772685 2566 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 23:57:37.772708 kubelet[2566]: E0116 23:57:37.772709 2566 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume probe triplet recurs near-verbatim through Jan 16 23:57:37.797; repeats omitted]
Jan 16 23:57:38.148997 containerd[1469]: time="2026-01-16T23:57:38.147620792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:38.150036 containerd[1469]: time="2026-01-16T23:57:38.149968563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 16 23:57:38.150998 containerd[1469]: time="2026-01-16T23:57:38.150884752Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:38.155113 containerd[1469]: time="2026-01-16T23:57:38.154608066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:38.156739 containerd[1469]: time="2026-01-16T23:57:38.156695120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.416298993s"
Jan 16 23:57:38.157232 containerd[1469]: time="2026-01-16T23:57:38.156880038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 16 23:57:38.160844 containerd[1469]: time="2026-01-16T23:57:38.160684592Z" level=info msg="CreateContainer within sandbox \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 16 23:57:38.186300 containerd[1469]: time="2026-01-16T23:57:38.186132640Z" level=info msg="CreateContainer within sandbox \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d\""
Jan 16 23:57:38.190969 containerd[1469]: time="2026-01-16T23:57:38.189614597Z" level=info msg="StartContainer for \"2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d\""
Jan 16 23:57:38.228276 systemd[1]: Started cri-containerd-2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d.scope - libcontainer container 2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d.
Jan 16 23:57:38.266163 containerd[1469]: time="2026-01-16T23:57:38.266106060Z" level=info msg="StartContainer for \"2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d\" returns successfully"
Jan 16 23:57:38.278837 systemd[1]: cri-containerd-2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d.scope: Deactivated successfully.
Jan 16 23:57:38.413473 containerd[1469]: time="2026-01-16T23:57:38.413254057Z" level=info msg="shim disconnected" id=2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d namespace=k8s.io Jan 16 23:57:38.413473 containerd[1469]: time="2026-01-16T23:57:38.413360695Z" level=warning msg="cleaning up after shim disconnected" id=2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d namespace=k8s.io Jan 16 23:57:38.413473 containerd[1469]: time="2026-01-16T23:57:38.413376335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 23:57:38.582796 kubelet[2566]: E0116 23:57:38.581742 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:57:38.742052 kubelet[2566]: I0116 23:57:38.741904 2566 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 23:57:38.744810 containerd[1469]: time="2026-01-16T23:57:38.744516318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 23:57:38.750404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d0808b1185421dee4fed8fc1830ffd18eec8458bafa2496b0fe762226b57c3d-rootfs.mount: Deactivated successfully. Jan 16 23:57:38.765941 kubelet[2566]: I0116 23:57:38.765862 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66dd57c865-p2tm8" podStartSLOduration=2.78917266 podStartE2EDuration="4.765842456s" podCreationTimestamp="2026-01-16 23:57:34 +0000 UTC" firstStartedPulling="2026-01-16 23:57:34.762444229 +0000 UTC m=+30.340577852" lastFinishedPulling="2026-01-16 23:57:36.739114025 +0000 UTC m=+32.317247648" observedRunningTime="2026-01-16 23:57:37.755793691 +0000 UTC m=+33.333927354" watchObservedRunningTime="2026-01-16 23:57:38.765842456 +0000 UTC m=+34.343976079" Jan 16 23:57:40.581846 kubelet[2566]: E0116 23:57:40.581789 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:57:41.267989 containerd[1469]: time="2026-01-16T23:57:41.267788496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:41.270783 containerd[1469]: time="2026-01-16T23:57:41.270498625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 16 23:57:41.275969 containerd[1469]: time="2026-01-16T23:57:41.274614539Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:41.278274 containerd[1469]: time="2026-01-16T23:57:41.278226659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:41.279174 containerd[1469]: time="2026-01-16T23:57:41.279124409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.534529652s" Jan 16 23:57:41.279174 containerd[1469]: time="2026-01-16T23:57:41.279170528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 16 23:57:41.283754 containerd[1469]: time="2026-01-16T23:57:41.283713437Z" level=info msg="CreateContainer within sandbox \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 23:57:41.302715 containerd[1469]: time="2026-01-16T23:57:41.302664025Z" level=info msg="CreateContainer within sandbox \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15\"" Jan 16 23:57:41.303753 containerd[1469]: time="2026-01-16T23:57:41.303721413Z" level=info msg="StartContainer for \"3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15\"" Jan 16 23:57:41.342323 systemd[1]: Started cri-containerd-3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15.scope - libcontainer container 3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15. Jan 16 23:57:41.382409 containerd[1469]: time="2026-01-16T23:57:41.382323252Z" level=info msg="StartContainer for \"3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15\" returns successfully" Jan 16 23:57:41.966057 containerd[1469]: time="2026-01-16T23:57:41.965978754Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 23:57:41.970578 systemd[1]: cri-containerd-3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15.scope: Deactivated successfully. Jan 16 23:57:41.979276 kubelet[2566]: I0116 23:57:41.978695 2566 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 23:57:41.997495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15-rootfs.mount: Deactivated successfully. Jan 16 23:57:42.053528 systemd[1]: Created slice kubepods-burstable-pod29ddbdd3_30d8_4cf4_8a5f_7715f3d5b4bb.slice - libcontainer container kubepods-burstable-pod29ddbdd3_30d8_4cf4_8a5f_7715f3d5b4bb.slice. Jan 16 23:57:42.073322 systemd[1]: Created slice kubepods-besteffort-pode83705ab_d8ce_46ca_880d_899f69158672.slice - libcontainer container kubepods-besteffort-pode83705ab_d8ce_46ca_880d_899f69158672.slice. 
Jan 16 23:57:42.081632 containerd[1469]: time="2026-01-16T23:57:42.080693094Z" level=info msg="shim disconnected" id=3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15 namespace=k8s.io Jan 16 23:57:42.081632 containerd[1469]: time="2026-01-16T23:57:42.080781333Z" level=warning msg="cleaning up after shim disconnected" id=3ce69fea215185821b0e7fef052069fbacb4e4fd781f9a270e7e43ea52032d15 namespace=k8s.io Jan 16 23:57:42.081632 containerd[1469]: time="2026-01-16T23:57:42.080890092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 23:57:42.091019 systemd[1]: Created slice kubepods-burstable-pod59071300_7ce3_4b48_ae86_f59c1ac4567d.slice - libcontainer container kubepods-burstable-pod59071300_7ce3_4b48_ae86_f59c1ac4567d.slice. Jan 16 23:57:42.106709 systemd[1]: Created slice kubepods-besteffort-pod861b4149_53db_42c9_9886_651961041ffb.slice - libcontainer container kubepods-besteffort-pod861b4149_53db_42c9_9886_651961041ffb.slice. Jan 16 23:57:42.122741 kubelet[2566]: I0116 23:57:42.122686 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jt8k\" (UniqueName: \"kubernetes.io/projected/29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb-kube-api-access-4jt8k\") pod \"coredns-668d6bf9bc-ktbgp\" (UID: \"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb\") " pod="kube-system/coredns-668d6bf9bc-ktbgp" Jan 16 23:57:42.122741 kubelet[2566]: I0116 23:57:42.122740 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75f2p\" (UniqueName: \"kubernetes.io/projected/e83705ab-d8ce-46ca-880d-899f69158672-kube-api-access-75f2p\") pod \"calico-kube-controllers-68bd9998fd-lpljt\" (UID: \"e83705ab-d8ce-46ca-880d-899f69158672\") " pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" Jan 16 23:57:42.122934 kubelet[2566]: I0116 23:57:42.122763 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcknv\" (UniqueName: \"kubernetes.io/projected/195fd954-db29-4a46-a5c3-26216d80a6af-kube-api-access-mcknv\") pod \"goldmane-666569f655-jr25d\" (UID: \"195fd954-db29-4a46-a5c3-26216d80a6af\") " pod="calico-system/goldmane-666569f655-jr25d" Jan 16 23:57:42.122934 kubelet[2566]: I0116 23:57:42.122782 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83705ab-d8ce-46ca-880d-899f69158672-tigera-ca-bundle\") pod \"calico-kube-controllers-68bd9998fd-lpljt\" (UID: \"e83705ab-d8ce-46ca-880d-899f69158672\") " pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" Jan 16 23:57:42.122934 kubelet[2566]: I0116 23:57:42.122807 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/861b4149-53db-42c9-9886-651961041ffb-calico-apiserver-certs\") pod \"calico-apiserver-c7c7c7dd6-kgtcm\" (UID: \"861b4149-53db-42c9-9886-651961041ffb\") " pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" Jan 16 23:57:42.122934 kubelet[2566]: I0116 23:57:42.122826 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59071300-7ce3-4b48-ae86-f59c1ac4567d-config-volume\") pod \"coredns-668d6bf9bc-9tw7r\" (UID: \"59071300-7ce3-4b48-ae86-f59c1ac4567d\") " pod="kube-system/coredns-668d6bf9bc-9tw7r" Jan 16 23:57:42.122934 kubelet[2566]: I0116 
23:57:42.122843 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxtn\" (UniqueName: \"kubernetes.io/projected/59071300-7ce3-4b48-ae86-f59c1ac4567d-kube-api-access-2xxtn\") pod \"coredns-668d6bf9bc-9tw7r\" (UID: \"59071300-7ce3-4b48-ae86-f59c1ac4567d\") " pod="kube-system/coredns-668d6bf9bc-9tw7r" Jan 16 23:57:42.123082 kubelet[2566]: I0116 23:57:42.122883 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmz5\" (UniqueName: \"kubernetes.io/projected/261f04ea-7bee-49b1-9c52-588c82c92cce-kube-api-access-8rmz5\") pod \"whisker-65b547c47d-4k7mm\" (UID: \"261f04ea-7bee-49b1-9c52-588c82c92cce\") " pod="calico-system/whisker-65b547c47d-4k7mm" Jan 16 23:57:42.123082 kubelet[2566]: I0116 23:57:42.122903 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195fd954-db29-4a46-a5c3-26216d80a6af-goldmane-ca-bundle\") pod \"goldmane-666569f655-jr25d\" (UID: \"195fd954-db29-4a46-a5c3-26216d80a6af\") " pod="calico-system/goldmane-666569f655-jr25d" Jan 16 23:57:42.123082 kubelet[2566]: I0116 23:57:42.122923 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b9c78c11-17fe-4d54-827b-16ba9d81154b-calico-apiserver-certs\") pod \"calico-apiserver-c7c7c7dd6-26hzj\" (UID: \"b9c78c11-17fe-4d54-827b-16ba9d81154b\") " pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" Jan 16 23:57:42.123082 kubelet[2566]: I0116 23:57:42.122980 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb-config-volume\") pod \"coredns-668d6bf9bc-ktbgp\" (UID: \"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb\") " pod="kube-system/coredns-668d6bf9bc-ktbgp" Jan 16 23:57:42.123082 kubelet[2566]: I0116 23:57:42.123001 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hczh\" (UniqueName: \"kubernetes.io/projected/b9c78c11-17fe-4d54-827b-16ba9d81154b-kube-api-access-6hczh\") pod \"calico-apiserver-c7c7c7dd6-26hzj\" (UID: \"b9c78c11-17fe-4d54-827b-16ba9d81154b\") " pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" Jan 16 23:57:42.123205 kubelet[2566]: I0116 23:57:42.123020 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-backend-key-pair\") pod \"whisker-65b547c47d-4k7mm\" (UID: \"261f04ea-7bee-49b1-9c52-588c82c92cce\") " pod="calico-system/whisker-65b547c47d-4k7mm" Jan 16 23:57:42.123205 kubelet[2566]: I0116 23:57:42.123039 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5md25\" (UniqueName: \"kubernetes.io/projected/861b4149-53db-42c9-9886-651961041ffb-kube-api-access-5md25\") pod \"calico-apiserver-c7c7c7dd6-kgtcm\" (UID: \"861b4149-53db-42c9-9886-651961041ffb\") " pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" Jan 16 23:57:42.123205 kubelet[2566]: I0116 23:57:42.123065 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-ca-bundle\") pod \"whisker-65b547c47d-4k7mm\" (UID: \"261f04ea-7bee-49b1-9c52-588c82c92cce\") " pod="calico-system/whisker-65b547c47d-4k7mm" Jan 16 23:57:42.123205 kubelet[2566]: I0116 23:57:42.123081 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195fd954-db29-4a46-a5c3-26216d80a6af-config\") pod \"goldmane-666569f655-jr25d\" (UID: \"195fd954-db29-4a46-a5c3-26216d80a6af\") " pod="calico-system/goldmane-666569f655-jr25d" Jan 16 23:57:42.123205 kubelet[2566]: I0116 23:57:42.123099 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/195fd954-db29-4a46-a5c3-26216d80a6af-goldmane-key-pair\") pod \"goldmane-666569f655-jr25d\" (UID: \"195fd954-db29-4a46-a5c3-26216d80a6af\") " pod="calico-system/goldmane-666569f655-jr25d" Jan 16 23:57:42.126407 systemd[1]: Created slice kubepods-besteffort-podb9c78c11_17fe_4d54_827b_16ba9d81154b.slice - libcontainer container kubepods-besteffort-podb9c78c11_17fe_4d54_827b_16ba9d81154b.slice. Jan 16 23:57:42.139127 systemd[1]: Created slice kubepods-besteffort-pod195fd954_db29_4a46_a5c3_26216d80a6af.slice - libcontainer container kubepods-besteffort-pod195fd954_db29_4a46_a5c3_26216d80a6af.slice. Jan 16 23:57:42.148333 systemd[1]: Created slice kubepods-besteffort-pod261f04ea_7bee_49b1_9c52_588c82c92cce.slice - libcontainer container kubepods-besteffort-pod261f04ea_7bee_49b1_9c52_588c82c92cce.slice. Jan 16 23:57:42.365384 containerd[1469]: time="2026-01-16T23:57:42.365321633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktbgp,Uid:29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb,Namespace:kube-system,Attempt:0,}" Jan 16 23:57:42.394354 containerd[1469]: time="2026-01-16T23:57:42.393932841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bd9998fd-lpljt,Uid:e83705ab-d8ce-46ca-880d-899f69158672,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:42.399886 containerd[1469]: time="2026-01-16T23:57:42.399498300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tw7r,Uid:59071300-7ce3-4b48-ae86-f59c1ac4567d,Namespace:kube-system,Attempt:0,}" Jan 16 23:57:42.420695 containerd[1469]: time="2026-01-16T23:57:42.420650630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-kgtcm,Uid:861b4149-53db-42c9-9886-651961041ffb,Namespace:calico-apiserver,Attempt:0,}" Jan 16 23:57:42.436463 containerd[1469]: time="2026-01-16T23:57:42.436374779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-26hzj,Uid:b9c78c11-17fe-4d54-827b-16ba9d81154b,Namespace:calico-apiserver,Attempt:0,}" Jan 16 23:57:42.445030 containerd[1469]: time="2026-01-16T23:57:42.444718728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jr25d,Uid:195fd954-db29-4a46-a5c3-26216d80a6af,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:42.453471 containerd[1469]: time="2026-01-16T23:57:42.453428073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b547c47d-4k7mm,Uid:261f04ea-7bee-49b1-9c52-588c82c92cce,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:42.553746 containerd[1469]: time="2026-01-16T23:57:42.553691981Z" level=error msg="Failed to destroy network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.554668 containerd[1469]: time="2026-01-16T23:57:42.554613251Z" level=error msg="encountered an error cleaning up failed sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.554766 containerd[1469]: time="2026-01-16T23:57:42.554686050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bd9998fd-lpljt,Uid:e83705ab-d8ce-46ca-880d-899f69158672,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.555051 kubelet[2566]: E0116 23:57:42.555000 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.555132 kubelet[2566]: E0116 23:57:42.555077 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" Jan 16 23:57:42.555132 kubelet[2566]: E0116 23:57:42.555098 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" Jan 16 23:57:42.555234 kubelet[2566]: E0116 23:57:42.555139 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:57:42.593151 systemd[1]: Created slice 
kubepods-besteffort-podf9b64606_aa04_4801_bf16_55e0f797524c.slice - libcontainer container kubepods-besteffort-podf9b64606_aa04_4801_bf16_55e0f797524c.slice. Jan 16 23:57:42.597208 containerd[1469]: time="2026-01-16T23:57:42.597085588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4ltk,Uid:f9b64606-aa04-4801-bf16-55e0f797524c,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:42.610699 containerd[1469]: time="2026-01-16T23:57:42.610458162Z" level=error msg="Failed to destroy network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.612244 containerd[1469]: time="2026-01-16T23:57:42.612142464Z" level=error msg="encountered an error cleaning up failed sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.613080 containerd[1469]: time="2026-01-16T23:57:42.613035374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktbgp,Uid:29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.614836 kubelet[2566]: E0116 23:57:42.614788 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.614935 kubelet[2566]: E0116 23:57:42.614854 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ktbgp" Jan 16 23:57:42.614935 kubelet[2566]: E0116 23:57:42.614879 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ktbgp" Jan 16 23:57:42.614935 kubelet[2566]: E0116 23:57:42.614919 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ktbgp_kube-system(29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-ktbgp_kube-system(29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ktbgp" podUID="29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb" Jan 16 23:57:42.625161 containerd[1469]: time="2026-01-16T23:57:42.624286132Z" level=error msg="Failed to destroy network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.625161 containerd[1469]: time="2026-01-16T23:57:42.624782726Z" level=error msg="encountered an error cleaning up failed sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.625161 containerd[1469]: time="2026-01-16T23:57:42.624866165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tw7r,Uid:59071300-7ce3-4b48-ae86-f59c1ac4567d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.625514 kubelet[2566]: E0116 23:57:42.625248 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.625514 kubelet[2566]: E0116 23:57:42.625303 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9tw7r" Jan 16 23:57:42.625514 kubelet[2566]: E0116 23:57:42.625334 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9tw7r" Jan 16 23:57:42.625795 kubelet[2566]: E0116 23:57:42.625377 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9tw7r_kube-system(59071300-7ce3-4b48-ae86-f59c1ac4567d)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9tw7r_kube-system(59071300-7ce3-4b48-ae86-f59c1ac4567d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9tw7r" podUID="59071300-7ce3-4b48-ae86-f59c1ac4567d" Jan 16 23:57:42.673985 containerd[1469]: time="2026-01-16T23:57:42.673888511Z" level=error msg="Failed to destroy network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.678570 containerd[1469]: time="2026-01-16T23:57:42.675070538Z" level=error msg="encountered an error cleaning up failed sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.678570 containerd[1469]: time="2026-01-16T23:57:42.676359284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-kgtcm,Uid:861b4149-53db-42c9-9886-651961041ffb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.678797 kubelet[2566]: E0116 23:57:42.676603 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.678797 kubelet[2566]: E0116 23:57:42.676662 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" Jan 16 23:57:42.678797 kubelet[2566]: E0116 23:57:42.676687 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" Jan 16 23:57:42.678903 kubelet[2566]: E0116 23:57:42.676724 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-c7c7c7dd6-kgtcm_calico-apiserver(861b4149-53db-42c9-9886-651961041ffb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c7c7c7dd6-kgtcm_calico-apiserver(861b4149-53db-42c9-9886-651961041ffb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb" Jan 16 23:57:42.684385 containerd[1469]: time="2026-01-16T23:57:42.684083760Z" level=error msg="Failed to destroy network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.685959 containerd[1469]: time="2026-01-16T23:57:42.685874821Z" level=error msg="encountered an error cleaning up failed sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.686309 containerd[1469]: time="2026-01-16T23:57:42.686268696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-26hzj,Uid:b9c78c11-17fe-4d54-827b-16ba9d81154b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.687305 kubelet[2566]: E0116 23:57:42.687269 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.687678 kubelet[2566]: E0116 23:57:42.687560 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" Jan 16 23:57:42.687678 kubelet[2566]: E0116 23:57:42.687594 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" Jan 16 23:57:42.687988 
kubelet[2566]: E0116 23:57:42.687656 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c7c7c7dd6-26hzj_calico-apiserver(b9c78c11-17fe-4d54-827b-16ba9d81154b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c7c7c7dd6-26hzj_calico-apiserver(b9c78c11-17fe-4d54-827b-16ba9d81154b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:57:42.713693 containerd[1469]: time="2026-01-16T23:57:42.713637198Z" level=error msg="Failed to destroy network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.714085 containerd[1469]: time="2026-01-16T23:57:42.713999714Z" level=error msg="encountered an error cleaning up failed sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.714085 containerd[1469]: time="2026-01-16T23:57:42.714064673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b547c47d-4k7mm,Uid:261f04ea-7bee-49b1-9c52-588c82c92cce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.715241 kubelet[2566]: E0116 23:57:42.714348 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.715241 kubelet[2566]: E0116 23:57:42.714406 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b547c47d-4k7mm" Jan 16 23:57:42.715241 kubelet[2566]: E0116 23:57:42.714425 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-65b547c47d-4k7mm" Jan 16 23:57:42.715406 kubelet[2566]: E0116 23:57:42.714468 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65b547c47d-4k7mm_calico-system(261f04ea-7bee-49b1-9c52-588c82c92cce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65b547c47d-4k7mm_calico-system(261f04ea-7bee-49b1-9c52-588c82c92cce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65b547c47d-4k7mm" podUID="261f04ea-7bee-49b1-9c52-588c82c92cce" Jan 16 23:57:42.718012 containerd[1469]: time="2026-01-16T23:57:42.717970711Z" level=error msg="Failed to destroy network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.718875 containerd[1469]: time="2026-01-16T23:57:42.718840261Z" level=error msg="encountered an error cleaning up failed sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.719154 containerd[1469]: time="2026-01-16T23:57:42.719100579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jr25d,Uid:195fd954-db29-4a46-a5c3-26216d80a6af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.719790 kubelet[2566]: E0116 23:57:42.719572 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.719790 kubelet[2566]: E0116 23:57:42.719650 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jr25d" Jan 16 23:57:42.719790 kubelet[2566]: E0116 23:57:42.719672 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jr25d" Jan 16 23:57:42.720020 kubelet[2566]: E0116 23:57:42.719727 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jr25d_calico-system(195fd954-db29-4a46-a5c3-26216d80a6af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jr25d_calico-system(195fd954-db29-4a46-a5c3-26216d80a6af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 23:57:42.743396 containerd[1469]: time="2026-01-16T23:57:42.743331755Z" level=error msg="Failed to destroy network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.744035 containerd[1469]: time="2026-01-16T23:57:42.743907908Z" level=error msg="encountered an error cleaning up failed sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.744265 containerd[1469]: time="2026-01-16T23:57:42.744235785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4ltk,Uid:f9b64606-aa04-4801-bf16-55e0f797524c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.744652 kubelet[2566]: E0116 23:57:42.744602 2566 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.744794 kubelet[2566]: E0116 23:57:42.744774 2566 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j4ltk" Jan 16 23:57:42.744895 kubelet[2566]: E0116 23:57:42.744867 2566 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j4ltk" Jan 16 23:57:42.745075 kubelet[2566]: E0116 23:57:42.745044 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:57:42.758012 kubelet[2566]: I0116 23:57:42.757901 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:57:42.759758 containerd[1469]: time="2026-01-16T23:57:42.759475259Z" level=info msg="StopPodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\"" Jan 16 23:57:42.759758 containerd[1469]: time="2026-01-16T23:57:42.759675696Z" level=info msg="Ensure that sandbox a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0 in task-service has been cleanup successfully" Jan 16 23:57:42.761804 kubelet[2566]: I0116 23:57:42.761257 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:57:42.762842 containerd[1469]: time="2026-01-16T23:57:42.762743223Z" level=info msg="StopPodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\"" Jan 16 23:57:42.763973 containerd[1469]: time="2026-01-16T23:57:42.763864771Z" level=info msg="Ensure that sandbox 372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc in task-service has been cleanup successfully" Jan 16 23:57:42.767544 kubelet[2566]: I0116 23:57:42.766774 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:57:42.769627 containerd[1469]: time="2026-01-16T23:57:42.769582269Z" level=info msg="StopPodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\"" Jan 16 23:57:42.771142 containerd[1469]: time="2026-01-16T23:57:42.770490019Z" level=info msg="Ensure that sandbox ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba in task-service has been cleanup successfully" Jan 16 23:57:42.775672 kubelet[2566]: I0116 23:57:42.775075 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:57:42.776649 kubelet[2566]: I0116 23:57:42.776627 2566 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 23:57:42.776740 containerd[1469]: time="2026-01-16T23:57:42.776666591Z" level=info msg="StopPodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\"" Jan 16 23:57:42.779391 containerd[1469]: time="2026-01-16T23:57:42.776826510Z" level=info msg="Ensure that sandbox 
b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a in task-service has been cleanup successfully" Jan 16 23:57:42.783710 kubelet[2566]: I0116 23:57:42.782987 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:57:42.785127 containerd[1469]: time="2026-01-16T23:57:42.784909502Z" level=info msg="StopPodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\"" Jan 16 23:57:42.785841 containerd[1469]: time="2026-01-16T23:57:42.785802852Z" level=info msg="Ensure that sandbox d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191 in task-service has been cleanup successfully" Jan 16 23:57:42.805071 containerd[1469]: time="2026-01-16T23:57:42.805031682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 23:57:42.807319 kubelet[2566]: I0116 23:57:42.806888 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:57:42.808322 containerd[1469]: time="2026-01-16T23:57:42.808278007Z" level=info msg="StopPodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\"" Jan 16 23:57:42.809020 containerd[1469]: time="2026-01-16T23:57:42.808839601Z" level=info msg="Ensure that sandbox 1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c in task-service has been cleanup successfully" Jan 16 23:57:42.825785 kubelet[2566]: I0116 23:57:42.825365 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:57:42.828906 containerd[1469]: time="2026-01-16T23:57:42.828762664Z" level=info msg="StopPodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\"" Jan 16 23:57:42.829027 containerd[1469]: time="2026-01-16T23:57:42.828990861Z" level=info msg="Ensure that sandbox 86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b in task-service has been cleanup successfully" Jan 16 23:57:42.865847 kubelet[2566]: I0116 23:57:42.865799 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:57:42.868834 containerd[1469]: time="2026-01-16T23:57:42.868789028Z" level=info msg="StopPodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\"" Jan 16 23:57:42.869862 containerd[1469]: time="2026-01-16T23:57:42.869010385Z" level=info msg="Ensure that sandbox b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd in task-service has been cleanup successfully" Jan 16 23:57:42.940012 containerd[1469]: time="2026-01-16T23:57:42.938603427Z" level=error msg="StopPodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" failed" error="failed to destroy network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.941662 kubelet[2566]: E0116 23:57:42.941624 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:57:42.941861 kubelet[2566]: E0116 23:57:42.941798 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191"} Jan 16 23:57:42.941987 kubelet[2566]: E0116 23:57:42.941970 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"195fd954-db29-4a46-a5c3-26216d80a6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.942114 kubelet[2566]: E0116 23:57:42.942096 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"195fd954-db29-4a46-a5c3-26216d80a6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 23:57:42.942507 containerd[1469]: time="2026-01-16T23:57:42.942468665Z" level=error msg="StopPodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" failed" error="failed to destroy network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.943727 kubelet[2566]: E0116 23:57:42.943693 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:57:42.944390 kubelet[2566]: E0116 23:57:42.944268 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba"} Jan 16 23:57:42.944390 kubelet[2566]: E0116 23:57:42.944328 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.944390 kubelet[2566]: E0116 23:57:42.944353 2566 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ktbgp" podUID="29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb" Jan 16 23:57:42.953287 containerd[1469]: time="2026-01-16T23:57:42.953105429Z" level=error msg="StopPodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" failed" error="failed to destroy network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.953648 kubelet[2566]: E0116 23:57:42.953416 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:57:42.953648 kubelet[2566]: E0116 23:57:42.953469 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0"} Jan 16 23:57:42.953648 kubelet[2566]: E0116 23:57:42.953502 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9c78c11-17fe-4d54-827b-16ba9d81154b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.953648 kubelet[2566]: E0116 23:57:42.953526 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9c78c11-17fe-4d54-827b-16ba9d81154b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:57:42.954352 containerd[1469]: time="2026-01-16T23:57:42.954064059Z" level=error msg="StopPodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" failed" error="failed to destroy network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.954534 kubelet[2566]: E0116 
23:57:42.954323 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:57:42.954534 kubelet[2566]: E0116 23:57:42.954363 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b"} Jan 16 23:57:42.954534 kubelet[2566]: E0116 23:57:42.954396 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"261f04ea-7bee-49b1-9c52-588c82c92cce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.954534 kubelet[2566]: E0116 23:57:42.954415 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"261f04ea-7bee-49b1-9c52-588c82c92cce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65b547c47d-4k7mm" podUID="261f04ea-7bee-49b1-9c52-588c82c92cce" Jan 16 23:57:42.957353 containerd[1469]: time="2026-01-16T23:57:42.956646071Z" level=error msg="StopPodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" failed" error="failed to destroy network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.957474 kubelet[2566]: E0116 23:57:42.957111 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:57:42.957474 kubelet[2566]: E0116 23:57:42.957158 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc"} Jan 16 23:57:42.957474 kubelet[2566]: E0116 23:57:42.957210 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59071300-7ce3-4b48-ae86-f59c1ac4567d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.957474 kubelet[2566]: E0116 23:57:42.957233 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59071300-7ce3-4b48-ae86-f59c1ac4567d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9tw7r" podUID="59071300-7ce3-4b48-ae86-f59c1ac4567d" Jan 16 23:57:42.960884 containerd[1469]: time="2026-01-16T23:57:42.960115753Z" level=error msg="StopPodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" failed" error="failed to destroy network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.961085 kubelet[2566]: E0116 23:57:42.960440 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:57:42.961085 kubelet[2566]: E0116 23:57:42.960487 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a"} Jan 16 23:57:42.961085 kubelet[2566]: E0116 23:57:42.960519 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9b64606-aa04-4801-bf16-55e0f797524c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.961085 kubelet[2566]: E0116 23:57:42.960540 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9b64606-aa04-4801-bf16-55e0f797524c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:57:42.969503 containerd[1469]: time="2026-01-16T23:57:42.969338772Z" level=error msg="StopPodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" failed" error="failed to destroy network for sandbox 
\"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.970079 kubelet[2566]: E0116 23:57:42.969584 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:57:42.970079 kubelet[2566]: E0116 23:57:42.969634 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c"} Jan 16 23:57:42.970079 kubelet[2566]: E0116 23:57:42.969669 2566 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e83705ab-d8ce-46ca-880d-899f69158672\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.970079 kubelet[2566]: E0116 23:57:42.969697 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e83705ab-d8ce-46ca-880d-899f69158672\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:57:42.979724 containerd[1469]: time="2026-01-16T23:57:42.979654900Z" level=error msg="StopPodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" failed" error="failed to destroy network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:42.980141 kubelet[2566]: E0116 23:57:42.979928 2566 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:57:42.980141 kubelet[2566]: E0116 23:57:42.980002 2566 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd"} Jan 16 23:57:42.980141 kubelet[2566]: E0116 23:57:42.980033 2566 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"861b4149-53db-42c9-9886-651961041ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:42.980141 kubelet[2566]: E0116 23:57:42.980061 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"861b4149-53db-42c9-9886-651961041ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb" Jan 16 23:57:43.301306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c-shm.mount: Deactivated successfully. Jan 16 23:57:43.301409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba-shm.mount: Deactivated successfully. Jan 16 23:57:47.346206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317675664.mount: Deactivated successfully. Jan 16 23:57:47.379100 containerd[1469]: time="2026-01-16T23:57:47.379022560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:47.380896 containerd[1469]: time="2026-01-16T23:57:47.380504266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 16 23:57:47.386727 containerd[1469]: time="2026-01-16T23:57:47.386605247Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:47.390106 containerd[1469]: time="2026-01-16T23:57:47.390019134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:47.391006 containerd[1469]: time="2026-01-16T23:57:47.390957765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.584968253s" Jan 16 23:57:47.391006 containerd[1469]: time="2026-01-16T23:57:47.391005644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 16 23:57:47.433757 containerd[1469]: time="2026-01-16T23:57:47.433679754Z" level=info msg="CreateContainer within sandbox \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 
16 23:57:47.456172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685938022.mount: Deactivated successfully. Jan 16 23:57:47.459481 containerd[1469]: time="2026-01-16T23:57:47.459433506Z" level=info msg="CreateContainer within sandbox \"552f3addad7ca85e27364a4cafb8a2d2d9a764dd73d1dc09dfd6745496166e03\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165\"" Jan 16 23:57:47.461630 containerd[1469]: time="2026-01-16T23:57:47.461484046Z" level=info msg="StartContainer for \"41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165\"" Jan 16 23:57:47.498312 systemd[1]: Started cri-containerd-41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165.scope - libcontainer container 41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165. Jan 16 23:57:47.546782 containerd[1469]: time="2026-01-16T23:57:47.546719266Z" level=info msg="StartContainer for \"41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165\" returns successfully" Jan 16 23:57:47.702014 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 23:57:47.702218 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 16 23:57:47.862877 containerd[1469]: time="2026-01-16T23:57:47.862820265Z" level=info msg="StopPodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\"" Jan 16 23:57:48.026302 kubelet[2566]: I0116 23:57:48.026222 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z2nlx" podStartSLOduration=1.457274081 podStartE2EDuration="14.026185778s" podCreationTimestamp="2026-01-16 23:57:34 +0000 UTC" firstStartedPulling="2026-01-16 23:57:34.823308056 +0000 UTC m=+30.401441679" lastFinishedPulling="2026-01-16 23:57:47.392219753 +0000 UTC m=+42.970353376" observedRunningTime="2026-01-16 23:57:47.957395035 +0000 UTC m=+43.535528698" watchObservedRunningTime="2026-01-16 23:57:48.026185778 +0000 UTC m=+43.604319401" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.027 [INFO][3742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.029 [INFO][3742] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" iface="eth0" netns="/var/run/netns/cni-0db8a600-57d6-8ef8-64f5-9e17542468c2" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.029 [INFO][3742] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" iface="eth0" netns="/var/run/netns/cni-0db8a600-57d6-8ef8-64f5-9e17542468c2" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.029 [INFO][3742] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" iface="eth0" netns="/var/run/netns/cni-0db8a600-57d6-8ef8-64f5-9e17542468c2" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.029 [INFO][3742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.029 [INFO][3742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.097 [INFO][3772] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.097 [INFO][3772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.097 [INFO][3772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.111 [WARNING][3772] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.111 [INFO][3772] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.113 [INFO][3772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.120705 containerd[1469]: 2026-01-16 23:57:48.116 [INFO][3742] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:57:48.120705 containerd[1469]: time="2026-01-16T23:57:48.120280852Z" level=info msg="TearDown network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" successfully" Jan 16 23:57:48.120705 containerd[1469]: time="2026-01-16T23:57:48.120305772Z" level=info msg="StopPodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" returns successfully" Jan 16 23:57:48.171717 kubelet[2566]: I0116 23:57:48.171607 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rmz5\" (UniqueName: \"kubernetes.io/projected/261f04ea-7bee-49b1-9c52-588c82c92cce-kube-api-access-8rmz5\") pod \"261f04ea-7bee-49b1-9c52-588c82c92cce\" (UID: \"261f04ea-7bee-49b1-9c52-588c82c92cce\") " Jan 16 23:57:48.172237 kubelet[2566]: I0116 23:57:48.171871 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-backend-key-pair\") pod \"261f04ea-7bee-49b1-9c52-588c82c92cce\" (UID: \"261f04ea-7bee-49b1-9c52-588c82c92cce\") " Jan 16 23:57:48.172237 kubelet[2566]: I0116 23:57:48.171904 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-ca-bundle\") pod \"261f04ea-7bee-49b1-9c52-588c82c92cce\" (UID: \"261f04ea-7bee-49b1-9c52-588c82c92cce\") " Jan 16 23:57:48.175189 kubelet[2566]: I0116 23:57:48.173860 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "261f04ea-7bee-49b1-9c52-588c82c92cce" (UID: "261f04ea-7bee-49b1-9c52-588c82c92cce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 23:57:48.177995 kubelet[2566]: I0116 23:57:48.177872 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261f04ea-7bee-49b1-9c52-588c82c92cce-kube-api-access-8rmz5" (OuterVolumeSpecName: "kube-api-access-8rmz5") pod "261f04ea-7bee-49b1-9c52-588c82c92cce" (UID: "261f04ea-7bee-49b1-9c52-588c82c92cce"). InnerVolumeSpecName "kube-api-access-8rmz5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 23:57:48.184334 kubelet[2566]: I0116 23:57:48.184270 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "261f04ea-7bee-49b1-9c52-588c82c92cce" (UID: "261f04ea-7bee-49b1-9c52-588c82c92cce"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 23:57:48.272673 kubelet[2566]: I0116 23:57:48.272576 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8rmz5\" (UniqueName: \"kubernetes.io/projected/261f04ea-7bee-49b1-9c52-588c82c92cce-kube-api-access-8rmz5\") on node \"ci-4081-3-6-n-fe2a5b3650\" DevicePath \"\"" Jan 16 23:57:48.272673 kubelet[2566]: I0116 23:57:48.272650 2566 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-fe2a5b3650\" DevicePath \"\"" Jan 16 23:57:48.272673 kubelet[2566]: I0116 23:57:48.272694 2566 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/261f04ea-7bee-49b1-9c52-588c82c92cce-whisker-ca-bundle\") on node \"ci-4081-3-6-n-fe2a5b3650\" DevicePath \"\"" Jan 16 23:57:48.348936 systemd[1]: run-netns-cni\x2d0db8a600\x2d57d6\x2d8ef8\x2d64f5\x2d9e17542468c2.mount: Deactivated successfully. Jan 16 23:57:48.350340 systemd[1]: var-lib-kubelet-pods-261f04ea\x2d7bee\x2d49b1\x2d9c52\x2d588c82c92cce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8rmz5.mount: Deactivated successfully. Jan 16 23:57:48.350418 systemd[1]: var-lib-kubelet-pods-261f04ea\x2d7bee\x2d49b1\x2d9c52\x2d588c82c92cce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 16 23:57:48.596537 systemd[1]: Removed slice kubepods-besteffort-pod261f04ea_7bee_49b1_9c52_588c82c92cce.slice - libcontainer container kubepods-besteffort-pod261f04ea_7bee_49b1_9c52_588c82c92cce.slice. Jan 16 23:57:48.914485 systemd[1]: run-containerd-runc-k8s.io-41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165-runc.SYtZ7x.mount: Deactivated successfully. Jan 16 23:57:48.984557 systemd[1]: Created slice kubepods-besteffort-pod377f520a_36e6_491f_865e_cdb387ff596c.slice - libcontainer container kubepods-besteffort-pod377f520a_36e6_491f_865e_cdb387ff596c.slice. 
Jan 16 23:57:49.081108 kubelet[2566]: I0116 23:57:49.081039 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/377f520a-36e6-491f-865e-cdb387ff596c-whisker-backend-key-pair\") pod \"whisker-7c4f4c46c6-chbl9\" (UID: \"377f520a-36e6-491f-865e-cdb387ff596c\") " pod="calico-system/whisker-7c4f4c46c6-chbl9" Jan 16 23:57:49.081108 kubelet[2566]: I0116 23:57:49.081118 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdf9c\" (UniqueName: \"kubernetes.io/projected/377f520a-36e6-491f-865e-cdb387ff596c-kube-api-access-wdf9c\") pod \"whisker-7c4f4c46c6-chbl9\" (UID: \"377f520a-36e6-491f-865e-cdb387ff596c\") " pod="calico-system/whisker-7c4f4c46c6-chbl9" Jan 16 23:57:49.081594 kubelet[2566]: I0116 23:57:49.081146 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/377f520a-36e6-491f-865e-cdb387ff596c-whisker-ca-bundle\") pod \"whisker-7c4f4c46c6-chbl9\" (UID: \"377f520a-36e6-491f-865e-cdb387ff596c\") " pod="calico-system/whisker-7c4f4c46c6-chbl9" Jan 16 23:57:49.291179 containerd[1469]: time="2026-01-16T23:57:49.290452614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c4f4c46c6-chbl9,Uid:377f520a-36e6-491f-865e-cdb387ff596c,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:49.485724 systemd-networkd[1366]: cali0de1534c69d: Link UP Jan 16 23:57:49.486802 systemd-networkd[1366]: cali0de1534c69d: Gained carrier Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.337 [INFO][3822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.370 [INFO][3822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0 whisker-7c4f4c46c6- calico-system 377f520a-36e6-491f-865e-cdb387ff596c 887 0 2026-01-16 23:57:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c4f4c46c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 whisker-7c4f4c46c6-chbl9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0de1534c69d [] [] }} ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.370 [INFO][3822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.404 [INFO][3834] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" HandleID="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.406 [INFO][3834] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" HandleID="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"whisker-7c4f4c46c6-chbl9", "timestamp":"2026-01-16 23:57:49.404678361 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.407 [INFO][3834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.407 [INFO][3834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.407 [INFO][3834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.421 [INFO][3834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.430 [INFO][3834] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.438 [INFO][3834] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.442 [INFO][3834] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.447 [INFO][3834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.447 [INFO][3834] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.450 [INFO][3834] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34 Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.455 [INFO][3834] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.465 [INFO][3834] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.193/26] block=192.168.52.192/26 handle="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.465 [INFO][3834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.193/26] handle="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:49.518677 
containerd[1469]: 2026-01-16 23:57:49.465 [INFO][3834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.518677 containerd[1469]: 2026-01-16 23:57:49.465 [INFO][3834] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.193/26] IPv6=[] ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" HandleID="k8s-pod-network.2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.519288 containerd[1469]: 2026-01-16 23:57:49.468 [INFO][3822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0", GenerateName:"whisker-7c4f4c46c6-", Namespace:"calico-system", SelfLink:"", UID:"377f520a-36e6-491f-865e-cdb387ff596c", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c4f4c46c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"whisker-7c4f4c46c6-chbl9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0de1534c69d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.519288 containerd[1469]: 2026-01-16 23:57:49.468 [INFO][3822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.193/32] ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.519288 containerd[1469]: 2026-01-16 23:57:49.468 [INFO][3822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0de1534c69d ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.519288 containerd[1469]: 2026-01-16 23:57:49.482 [INFO][3822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.519288 containerd[1469]: 2026-01-16 23:57:49.491 [INFO][3822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID
to endpoint ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0", GenerateName:"whisker-7c4f4c46c6-", Namespace:"calico-system", SelfLink:"", UID:"377f520a-36e6-491f-865e-cdb387ff596c", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c4f4c46c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34", Pod:"whisker-7c4f4c46c6-chbl9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0de1534c69d", MAC:"86:a9:b4:fd:05:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.519288 containerd[1469]: 2026-01-16 23:57:49.512 [INFO][3822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34" Namespace="calico-system" Pod="whisker-7c4f4c46c6-chbl9" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--7c4f4c46c6--chbl9-eth0" Jan 16 23:57:49.555663 containerd[1469]: time="2026-01-16T23:57:49.554222902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:49.555663 containerd[1469]: time="2026-01-16T23:57:49.554280662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:49.555663 containerd[1469]: time="2026-01-16T23:57:49.554292262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:49.555663 containerd[1469]: time="2026-01-16T23:57:49.554380661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:49.605369 systemd[1]: Started cri-containerd-2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34.scope - libcontainer container 2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34.
Jan 16 23:57:49.684440 containerd[1469]: time="2026-01-16T23:57:49.684385583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c4f4c46c6-chbl9,Uid:377f520a-36e6-491f-865e-cdb387ff596c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fb87dbac54cd30c6e277126405bffc48a89dd937e6adc16c8b0bce2ecfdcf34\"" Jan 16 23:57:49.689858 containerd[1469]: time="2026-01-16T23:57:49.689430816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:57:50.038279 containerd[1469]: time="2026-01-16T23:57:50.038225248Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:50.040987 containerd[1469]: time="2026-01-16T23:57:50.040886104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:57:50.041208 containerd[1469]: time="2026-01-16T23:57:50.040966903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:57:50.041407 kubelet[2566]: E0116 23:57:50.041354 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:57:50.041517 kubelet[2566]: E0116 23:57:50.041421 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:57:50.049966 kubelet[2566]: E0116 23:57:50.049731 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:740b13404779446abb486266eca53865,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:50.053628 containerd[1469]: time="2026-01-16T23:57:50.052381000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:57:50.089235 kernel: bpftool[4021]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 16 23:57:50.309766 systemd-networkd[1366]: vxlan.calico: Link UP Jan 16 23:57:50.309776 systemd-networkd[1366]: vxlan.calico: Gained carrier Jan 16 23:57:50.422258 containerd[1469]: time="2026-01-16T23:57:50.421838142Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:50.424217 containerd[1469]: time="2026-01-16T23:57:50.423846444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:57:50.424217 containerd[1469]: time="2026-01-16T23:57:50.423915284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:50.425053 kubelet[2566]: E0116 23:57:50.424661 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:57:50.425053 kubelet[2566]: E0116 23:57:50.424731 2566 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:57:50.428145 kubelet[2566]: E0116 23:57:50.426298 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:50.429314 kubelet[2566]: E0116 23:57:50.429214 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c" Jan 16 23:57:50.591201 kubelet[2566]: 
I0116 23:57:50.591029 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261f04ea-7bee-49b1-9c52-588c82c92cce" path="/var/lib/kubelet/pods/261f04ea-7bee-49b1-9c52-588c82c92cce/volumes" Jan 16 23:57:50.898862 kubelet[2566]: E0116 23:57:50.898651 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c" Jan 16 23:57:51.151618 systemd-networkd[1366]: cali0de1534c69d: Gained IPv6LL Jan 16 23:57:52.110265 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Jan 16 23:57:53.587314 containerd[1469]: time="2026-01-16T23:57:53.584252641Z" level=info msg="StopPodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\"" Jan 16 23:57:53.587314 containerd[1469]: time="2026-01-16T23:57:53.584702157Z" level=info msg="StopPodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\"" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.684 [INFO][4119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.687 [INFO][4119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" iface="eth0" netns="/var/run/netns/cni-9cf52ce6-bbba-dfc4-e111-516f3d97aedd" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.689 [INFO][4119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" iface="eth0" netns="/var/run/netns/cni-9cf52ce6-bbba-dfc4-e111-516f3d97aedd" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.694 [INFO][4119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" iface="eth0" netns="/var/run/netns/cni-9cf52ce6-bbba-dfc4-e111-516f3d97aedd" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.694 [INFO][4119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.695 [INFO][4119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.728 [INFO][4131] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.728 [INFO][4131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.728 [INFO][4131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.739 [WARNING][4131] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.739 [INFO][4131] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.741 [INFO][4131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:53.747809 containerd[1469]: 2026-01-16 23:57:53.746 [INFO][4119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:57:53.752227 containerd[1469]: time="2026-01-16T23:57:53.752182806Z" level=info msg="TearDown network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" successfully" Jan 16 23:57:53.752227 containerd[1469]: time="2026-01-16T23:57:53.752222925Z" level=info msg="StopPodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" returns successfully" Jan 16 23:57:53.753181 systemd[1]: run-netns-cni\x2d9cf52ce6\x2dbbba\x2ddfc4\x2de111\x2d516f3d97aedd.mount: Deactivated successfully. Jan 16 23:57:53.772305 containerd[1469]: time="2026-01-16T23:57:53.772259114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jr25d,Uid:195fd954-db29-4a46-a5c3-26216d80a6af,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.692 [INFO][4118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.693 [INFO][4118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" iface="eth0" netns="/var/run/netns/cni-36adcbb6-a5f5-aa7c-1c77-8b06e88bd9aa" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.697 [INFO][4118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" iface="eth0" netns="/var/run/netns/cni-36adcbb6-a5f5-aa7c-1c77-8b06e88bd9aa" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.697 [INFO][4118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" iface="eth0" netns="/var/run/netns/cni-36adcbb6-a5f5-aa7c-1c77-8b06e88bd9aa" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.697 [INFO][4118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.697 [INFO][4118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.728 [INFO][4133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.728 [INFO][4133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.742 [INFO][4133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.761 [WARNING][4133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.761 [INFO][4133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.765 [INFO][4133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:53.774388 containerd[1469]: 2026-01-16 23:57:53.771 [INFO][4118] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:57:53.776026 containerd[1469]: time="2026-01-16T23:57:53.775885163Z" level=info msg="TearDown network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" successfully" Jan 16 23:57:53.776112 containerd[1469]: time="2026-01-16T23:57:53.776028442Z" level=info msg="StopPodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" returns successfully" Jan 16 23:57:53.777159 containerd[1469]: time="2026-01-16T23:57:53.776814315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tw7r,Uid:59071300-7ce3-4b48-ae86-f59c1ac4567d,Namespace:kube-system,Attempt:1,}" Jan 16 23:57:53.780653 systemd[1]: run-netns-cni\x2d36adcbb6\x2da5f5\x2daa7c\x2d1c77\x2d8b06e88bd9aa.mount: Deactivated successfully. Jan 16 23:57:54.005142 systemd-networkd[1366]: calid738080d3eb: Link UP Jan 16 23:57:54.006779 systemd-networkd[1366]: calid738080d3eb: Gained carrier Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.868 [INFO][4145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0 goldmane-666569f655- calico-system 195fd954-db29-4a46-a5c3-26216d80a6af 918 0 2026-01-16 23:57:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 goldmane-666569f655-jr25d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid738080d3eb [] [] }} ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.870 [INFO][4145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.925 [INFO][4169] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" HandleID="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.925 [INFO][4169] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" HandleID="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"goldmane-666569f655-jr25d", "timestamp":"2026-01-16 23:57:53.925684602 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.925 [INFO][4169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.926 [INFO][4169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.926 [INFO][4169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.944 [INFO][4169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.955 [INFO][4169] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.962 [INFO][4169] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.967 [INFO][4169] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.971 [INFO][4169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.971 [INFO][4169] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.973 [INFO][4169] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.979 [INFO][4169] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.989 [INFO][4169] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.194/26] block=192.168.52.192/26 handle="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.989 [INFO][4169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.194/26] handle="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.989 [INFO][4169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
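The IPAM trace above repeats a fixed pattern for every pod scheduled here: confirm the host's affinity to the 192.168.52.192/26 block, then claim the next free address from it (192.168.52.194 here; the same block later yields .195 and .196). A minimal Go sketch of that containment arithmetic, using only net/netip; the prefix and addresses are copied from the log, everything else is illustrative and not Calico code:

// block_check.go - rough sketch of the /26 block arithmetic in the Calico
// IPAM lines above. Values copied from the log; not Calico's implementation.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this node holds an affinity for, per "Trying affinity for ...".
	block := netip.MustParsePrefix("192.168.52.192/26")

	// Addresses the log shows being claimed from that block.
	for _, s := range []string{"192.168.52.194", "192.168.52.195", "192.168.52.196"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses: .192 through .255.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}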
Jan 16 23:57:54.033306 containerd[1469]: 2026-01-16 23:57:53.989 [INFO][4169] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.194/26] IPv6=[] ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" HandleID="k8s-pod-network.2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.035378 containerd[1469]: 2026-01-16 23:57:53.995 [INFO][4145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"195fd954-db29-4a46-a5c3-26216d80a6af", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"goldmane-666569f655-jr25d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid738080d3eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:54.035378 containerd[1469]: 2026-01-16 23:57:53.995 [INFO][4145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.194/32] ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.035378 containerd[1469]: 2026-01-16 23:57:53.995 [INFO][4145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid738080d3eb ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.035378 containerd[1469]: 2026-01-16 23:57:54.007 [INFO][4145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.035378 containerd[1469]: 2026-01-16 23:57:54.009 [INFO][4145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" 
Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"195fd954-db29-4a46-a5c3-26216d80a6af", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c", Pod:"goldmane-666569f655-jr25d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid738080d3eb", MAC:"46:96:b7:db:a6:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:54.035378 containerd[1469]: 2026-01-16 23:57:54.022 [INFO][4145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c" Namespace="calico-system" Pod="goldmane-666569f655-jr25d" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:57:54.087015 containerd[1469]: time="2026-01-16T23:57:54.086856676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:54.087015 containerd[1469]: time="2026-01-16T23:57:54.086939636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:54.087015 containerd[1469]: time="2026-01-16T23:57:54.086990595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:54.088141 containerd[1469]: time="2026-01-16T23:57:54.087197834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:54.118241 systemd[1]: Started cri-containerd-2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c.scope - libcontainer container 2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c. 
Jan 16 23:57:54.129756 systemd-networkd[1366]: cali85ab4a39156: Link UP Jan 16 23:57:54.130042 systemd-networkd[1366]: cali85ab4a39156: Gained carrier Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.883 [INFO][4154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0 coredns-668d6bf9bc- kube-system 59071300-7ce3-4b48-ae86-f59c1ac4567d 919 0 2026-01-16 23:57:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 coredns-668d6bf9bc-9tw7r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85ab4a39156 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.884 [INFO][4154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.939 [INFO][4173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" HandleID="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.940 [INFO][4173] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" HandleID="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032ce20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"coredns-668d6bf9bc-9tw7r", "timestamp":"2026-01-16 23:57:53.939296406 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.940 [INFO][4173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.990 [INFO][4173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:53.991 [INFO][4173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.046 [INFO][4173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.059 [INFO][4173] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.071 [INFO][4173] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.079 [INFO][4173] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.086 [INFO][4173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.087 [INFO][4173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.091 [INFO][4173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.101 [INFO][4173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.117 [INFO][4173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.195/26] block=192.168.52.192/26 handle="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.118 [INFO][4173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.195/26] handle="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.118 [INFO][4173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 23:57:54.155405 containerd[1469]: 2026-01-16 23:57:54.119 [INFO][4173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.195/26] IPv6=[] ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" HandleID="k8s-pod-network.781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.156037 containerd[1469]: 2026-01-16 23:57:54.123 [INFO][4154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"59071300-7ce3-4b48-ae86-f59c1ac4567d", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"coredns-668d6bf9bc-9tw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85ab4a39156", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:54.156037 containerd[1469]: 2026-01-16 23:57:54.123 [INFO][4154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.195/32] ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.156037 containerd[1469]: 2026-01-16 23:57:54.123 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85ab4a39156 ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.156037 containerd[1469]: 2026-01-16 23:57:54.129 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.156037 containerd[1469]: 2026-01-16 23:57:54.132 [INFO][4154] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"59071300-7ce3-4b48-ae86-f59c1ac4567d", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf", Pod:"coredns-668d6bf9bc-9tw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85ab4a39156", MAC:"46:56:d4:36:eb:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:54.156037 containerd[1469]: 2026-01-16 23:57:54.151 [INFO][4154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tw7r" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:57:54.190777 containerd[1469]: time="2026-01-16T23:57:54.190724283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jr25d,Uid:195fd954-db29-4a46-a5c3-26216d80a6af,Namespace:calico-system,Attempt:1,} returns sandbox id \"2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c\"" Jan 16 23:57:54.197830 containerd[1469]: time="2026-01-16T23:57:54.197515826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:54.197830 containerd[1469]: time="2026-01-16T23:57:54.197632385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:54.197830 containerd[1469]: time="2026-01-16T23:57:54.197645585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:54.200016 containerd[1469]: time="2026-01-16T23:57:54.198847575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:54.205351 containerd[1469]: time="2026-01-16T23:57:54.205084002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:57:54.226174 systemd[1]: Started cri-containerd-781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf.scope - libcontainer container 781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf. Jan 16 23:57:54.288357 containerd[1469]: time="2026-01-16T23:57:54.288302183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tw7r,Uid:59071300-7ce3-4b48-ae86-f59c1ac4567d,Namespace:kube-system,Attempt:1,} returns sandbox id \"781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf\"" Jan 16 23:57:54.295314 containerd[1469]: time="2026-01-16T23:57:54.294933567Z" level=info msg="CreateContainer within sandbox \"781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 23:57:54.320112 containerd[1469]: time="2026-01-16T23:57:54.319941397Z" level=info msg="CreateContainer within sandbox \"781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45fc8fb311cd01dc41543903cd9460c4044186ec51c6c52058bd51885f52f7f2\"" Jan 16 23:57:54.321868 containerd[1469]: time="2026-01-16T23:57:54.321808061Z" level=info msg="StartContainer for \"45fc8fb311cd01dc41543903cd9460c4044186ec51c6c52058bd51885f52f7f2\"" Jan 16 23:57:54.364397 systemd[1]: Started cri-containerd-45fc8fb311cd01dc41543903cd9460c4044186ec51c6c52058bd51885f52f7f2.scope - libcontainer container 45fc8fb311cd01dc41543903cd9460c4044186ec51c6c52058bd51885f52f7f2. 
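Every record above shares the same outer framing: a syslog-style timestamp, a unit name with a PID in brackets, then the unit's own payload (klog bodies for kubelet, structured time=/level=/msg= fields for containerd). A rough Go sketch for splitting just that outer framing, assuming the "Mon DD HH:MM:SS.ffffff unit[pid]: message" shape seen throughout; the regex is illustrative, not a general journald grammar:

// journal_split.go - sketch: split the outer "Jan 16 23:57:54.364397
// systemd[1]: ..." framing into timestamp, unit, pid, and payload.
package main

import (
	"fmt"
	"regexp"
)

// Assumed shape: "<Mon> <day> <HH:MM:SS.ffffff> <unit>[<pid>]: <payload>"
var frame = regexp.MustCompile(
	`^([A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d+) (\S+?)\[(\d+)\]: (.*)$`)

func main() {
	line := `Jan 16 23:57:54.395438 containerd[1469]: time="2026-01-16T23:57:54.395261404Z" level=info msg="StartContainer returns successfully"`
	if m := frame.FindStringSubmatch(line); m != nil {
		fmt.Printf("ts=%s unit=%s pid=%s\npayload=%s\n", m[1], m[2], m[3], m[4])
	}
}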
Jan 16 23:57:54.395438 containerd[1469]: time="2026-01-16T23:57:54.395261404Z" level=info msg="StartContainer for \"45fc8fb311cd01dc41543903cd9460c4044186ec51c6c52058bd51885f52f7f2\" returns successfully" Jan 16 23:57:54.559529 containerd[1469]: time="2026-01-16T23:57:54.559280385Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:54.562234 containerd[1469]: time="2026-01-16T23:57:54.561876963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:57:54.562234 containerd[1469]: time="2026-01-16T23:57:54.561973882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:54.563451 kubelet[2566]: E0116 23:57:54.563178 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:54.563451 kubelet[2566]: E0116 23:57:54.563241 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:54.567534 kubelet[2566]: E0116 23:57:54.567443 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcknv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jr25d_calico-system(195fd954-db29-4a46-a5c3-26216d80a6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:54.568693 kubelet[2566]: E0116 23:57:54.568654 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 23:57:54.586539 containerd[1469]: time="2026-01-16T23:57:54.584792570Z" level=info msg="StopPodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\"" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.666 [INFO][4335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.666 [INFO][4335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" iface="eth0" netns="/var/run/netns/cni-33dc918e-6285-f614-701f-27718dc69870" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.666 [INFO][4335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" iface="eth0" netns="/var/run/netns/cni-33dc918e-6285-f614-701f-27718dc69870" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.667 [INFO][4335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" iface="eth0" netns="/var/run/netns/cni-33dc918e-6285-f614-701f-27718dc69870" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.667 [INFO][4335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.667 [INFO][4335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.701 [INFO][4342] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.701 [INFO][4342] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.701 [INFO][4342] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.714 [WARNING][4342] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.714 [INFO][4342] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.717 [INFO][4342] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:54.721777 containerd[1469]: 2026-01-16 23:57:54.720 [INFO][4335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:57:54.723548 containerd[1469]: time="2026-01-16T23:57:54.723204126Z" level=info msg="TearDown network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" successfully" Jan 16 23:57:54.723548 containerd[1469]: time="2026-01-16T23:57:54.723241806Z" level=info msg="StopPodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" returns successfully" Jan 16 23:57:54.724616 containerd[1469]: time="2026-01-16T23:57:54.724045999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-26hzj,Uid:b9c78c11-17fe-4d54-827b-16ba9d81154b,Namespace:calico-apiserver,Attempt:1,}" Jan 16 23:57:54.759766 systemd[1]: run-netns-cni\x2d33dc918e\x2d6285\x2df614\x2d701f\x2d27718dc69870.mount: Deactivated successfully. 
Jan 16 23:57:54.942142 kubelet[2566]: E0116 23:57:54.941073 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 23:57:54.991315 kubelet[2566]: I0116 23:57:54.991150 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9tw7r" podStartSLOduration=42.991128674 podStartE2EDuration="42.991128674s" podCreationTimestamp="2026-01-16 23:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:54.956567924 +0000 UTC m=+50.534701587" watchObservedRunningTime="2026-01-16 23:57:54.991128674 +0000 UTC m=+50.569262297" Jan 16 23:57:55.033867 systemd-networkd[1366]: cali0beb6567bb1: Link UP Jan 16 23:57:55.036928 systemd-networkd[1366]: cali0beb6567bb1: Gained carrier Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.828 [INFO][4351] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0 calico-apiserver-c7c7c7dd6- calico-apiserver b9c78c11-17fe-4d54-827b-16ba9d81154b 937 0 2026-01-16 23:57:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c7c7c7dd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 calico-apiserver-c7c7c7dd6-26hzj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0beb6567bb1 [] [] }} ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.829 [INFO][4351] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.890 [INFO][4363] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" HandleID="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.891 [INFO][4363] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" HandleID="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"calico-apiserver-c7c7c7dd6-26hzj", "timestamp":"2026-01-16 23:57:54.890817477 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.891 [INFO][4363] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.892 [INFO][4363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.892 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.918 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.934 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.950 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.958 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.969 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.969 [INFO][4363] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.976 [INFO][4363] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:54.993 [INFO][4363] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:55.008 [INFO][4363] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.196/26] block=192.168.52.192/26 handle="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:55.008 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.196/26] handle="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:55.008 [INFO][4363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 23:57:55.069227 containerd[1469]: 2026-01-16 23:57:55.008 [INFO][4363] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.196/26] IPv6=[] ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" HandleID="k8s-pod-network.6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.070031 containerd[1469]: 2026-01-16 23:57:55.013 [INFO][4351] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9c78c11-17fe-4d54-827b-16ba9d81154b", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"calico-apiserver-c7c7c7dd6-26hzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0beb6567bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:55.070031 containerd[1469]: 2026-01-16 23:57:55.013 [INFO][4351] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.196/32] ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.070031 containerd[1469]: 2026-01-16 23:57:55.013 [INFO][4351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0beb6567bb1 ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.070031 containerd[1469]: 2026-01-16 23:57:55.040 [INFO][4351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.070031 containerd[1469]: 2026-01-16 23:57:55.042 
[INFO][4351] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9c78c11-17fe-4d54-827b-16ba9d81154b", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c", Pod:"calico-apiserver-c7c7c7dd6-26hzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0beb6567bb1", MAC:"b2:59:e4:1a:a0:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:55.070031 containerd[1469]: 2026-01-16 23:57:55.064 [INFO][4351] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-26hzj" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:57:55.106615 containerd[1469]: time="2026-01-16T23:57:55.106314279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:55.107669 containerd[1469]: time="2026-01-16T23:57:55.107297311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:55.107669 containerd[1469]: time="2026-01-16T23:57:55.107327991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:55.109158 containerd[1469]: time="2026-01-16T23:57:55.108501501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:55.165413 systemd[1]: Started cri-containerd-6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c.scope - libcontainer container 6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c. 
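The same failure is reproducible on the node through containerd's Go client, which is what the kubelet's CRI PullImage calls ultimately drive. A sketch assuming access to the default containerd socket and the "k8s.io" namespace that CRI-managed images live in:

// pull_check.go - sketch: replay the PullImage step via containerd's Go
// client. Socket path and "k8s.io" namespace are the usual defaults; the
// image ref is the one failing in the log above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	_, err = client.Pull(ctx,
		"ghcr.io/flatcar/calico/apiserver:v3.30.4", containerd.WithPullUnpack)
	// Expect the same "failed to resolve reference ... not found" error
	// that the PullImage log line above reports.
	fmt.Println("pull result:", err)
}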
Jan 16 23:57:55.219414 containerd[1469]: time="2026-01-16T23:57:55.219172745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-26hzj,Uid:b9c78c11-17fe-4d54-827b-16ba9d81154b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c\"" Jan 16 23:57:55.222191 containerd[1469]: time="2026-01-16T23:57:55.221478126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:55.374570 systemd-networkd[1366]: calid738080d3eb: Gained IPv6LL Jan 16 23:57:55.573588 containerd[1469]: time="2026-01-16T23:57:55.573272615Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:55.575660 containerd[1469]: time="2026-01-16T23:57:55.575444037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:55.575660 containerd[1469]: time="2026-01-16T23:57:55.575550436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:55.576256 kubelet[2566]: E0116 23:57:55.575813 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:55.576256 kubelet[2566]: E0116 23:57:55.576124 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:55.576807 kubelet[2566]: E0116 23:57:55.576268 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hczh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-26hzj_calico-apiserver(b9c78c11-17fe-4d54-827b-16ba9d81154b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:55.578180 kubelet[2566]: E0116 23:57:55.578093 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:57:55.582265 containerd[1469]: time="2026-01-16T23:57:55.581851064Z" level=info msg="StopPodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\"" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.648 [INFO][4435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.648 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" iface="eth0" netns="/var/run/netns/cni-9a140b3c-b780-3628-c809-fa0a73801088" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.648 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" iface="eth0" netns="/var/run/netns/cni-9a140b3c-b780-3628-c809-fa0a73801088" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.651 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" iface="eth0" netns="/var/run/netns/cni-9a140b3c-b780-3628-c809-fa0a73801088" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.651 [INFO][4435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.651 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.675 [INFO][4446] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.676 [INFO][4446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.676 [INFO][4446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.686 [WARNING][4446] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.686 [INFO][4446] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.689 [INFO][4446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:55.693209 containerd[1469]: 2026-01-16 23:57:55.691 [INFO][4435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:57:55.697393 systemd[1]: run-netns-cni\x2d9a140b3c\x2db780\x2d3628\x2dc809\x2dfa0a73801088.mount: Deactivated successfully. 
Jan 16 23:57:55.697801 containerd[1469]: time="2026-01-16T23:57:55.697486227Z" level=info msg="TearDown network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" successfully" Jan 16 23:57:55.697801 containerd[1469]: time="2026-01-16T23:57:55.697522587Z" level=info msg="StopPodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" returns successfully" Jan 16 23:57:55.701342 containerd[1469]: time="2026-01-16T23:57:55.699458971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktbgp,Uid:29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb,Namespace:kube-system,Attempt:1,}" Jan 16 23:57:55.869335 systemd-networkd[1366]: cali1e75a8e80e5: Link UP Jan 16 23:57:55.870853 systemd-networkd[1366]: cali1e75a8e80e5: Gained carrier Jan 16 23:57:55.886139 systemd-networkd[1366]: cali85ab4a39156: Gained IPv6LL Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.765 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0 coredns-668d6bf9bc- kube-system 29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb 959 0 2026-01-16 23:57:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 coredns-668d6bf9bc-ktbgp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e75a8e80e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.766 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.798 [INFO][4464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" HandleID="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.798 [INFO][4464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" HandleID="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"coredns-668d6bf9bc-ktbgp", "timestamp":"2026-01-16 23:57:55.798333833 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.798 [INFO][4464] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.798 [INFO][4464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.798 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.810 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.822 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.829 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.833 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.838 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.838 [INFO][4464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.840 [INFO][4464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300 Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.847 [INFO][4464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.860 [INFO][4464] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.197/26] block=192.168.52.192/26 handle="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.861 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.197/26] handle="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.861 [INFO][4464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 23:57:55.895106 containerd[1469]: 2026-01-16 23:57:55.861 [INFO][4464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.197/26] IPv6=[] ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" HandleID="k8s-pod-network.b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.897353 containerd[1469]: 2026-01-16 23:57:55.864 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"coredns-668d6bf9bc-ktbgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e75a8e80e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:55.897353 containerd[1469]: 2026-01-16 23:57:55.864 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.197/32] ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.897353 containerd[1469]: 2026-01-16 23:57:55.864 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e75a8e80e5 ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.897353 containerd[1469]: 2026-01-16 23:57:55.869 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.897353 containerd[1469]: 2026-01-16 23:57:55.871 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300", Pod:"coredns-668d6bf9bc-ktbgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e75a8e80e5", MAC:"aa:27:ae:2e:da:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:55.897353 containerd[1469]: 2026-01-16 23:57:55.890 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktbgp" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:57:55.920835 containerd[1469]: time="2026-01-16T23:57:55.920697181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:55.920835 containerd[1469]: time="2026-01-16T23:57:55.920754860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:55.921662 containerd[1469]: time="2026-01-16T23:57:55.920781540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:55.921662 containerd[1469]: time="2026-01-16T23:57:55.920937779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:55.947053 systemd[1]: run-containerd-runc-k8s.io-b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300-runc.3Uj4rJ.mount: Deactivated successfully. Jan 16 23:57:55.954565 kubelet[2566]: E0116 23:57:55.954418 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:57:55.954565 kubelet[2566]: E0116 23:57:55.954495 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 23:57:55.955419 systemd[1]: Started cri-containerd-b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300.scope - libcontainer container b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300. Jan 16 23:57:56.010382 containerd[1469]: time="2026-01-16T23:57:56.010336560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktbgp,Uid:29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300\"" Jan 16 23:57:56.018447 containerd[1469]: time="2026-01-16T23:57:56.017997098Z" level=info msg="CreateContainer within sandbox \"b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 23:57:56.042321 containerd[1469]: time="2026-01-16T23:57:56.042254940Z" level=info msg="CreateContainer within sandbox \"b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a74e64dda06f15645305cf35e1d0543c811493b368c23a40778cd8769fc82b0b\"" Jan 16 23:57:56.045686 containerd[1469]: time="2026-01-16T23:57:56.045648112Z" level=info msg="StartContainer for \"a74e64dda06f15645305cf35e1d0543c811493b368c23a40778cd8769fc82b0b\"" Jan 16 23:57:56.080233 systemd[1]: Started cri-containerd-a74e64dda06f15645305cf35e1d0543c811493b368c23a40778cd8769fc82b0b.scope - libcontainer container a74e64dda06f15645305cf35e1d0543c811493b368c23a40778cd8769fc82b0b. 
Jan 16 23:57:56.112918 containerd[1469]: time="2026-01-16T23:57:56.112626726Z" level=info msg="StartContainer for \"a74e64dda06f15645305cf35e1d0543c811493b368c23a40778cd8769fc82b0b\" returns successfully" Jan 16 23:57:56.206170 systemd-networkd[1366]: cali0beb6567bb1: Gained IPv6LL Jan 16 23:57:56.961799 kubelet[2566]: E0116 23:57:56.961466 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:57:56.975051 systemd-networkd[1366]: cali1e75a8e80e5: Gained IPv6LL Jan 16 23:57:56.992926 kubelet[2566]: I0116 23:57:56.992839 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ktbgp" podStartSLOduration=44.992818153 podStartE2EDuration="44.992818153s" podCreationTimestamp="2026-01-16 23:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:56.976416086 +0000 UTC m=+52.554549709" watchObservedRunningTime="2026-01-16 23:57:56.992818153 +0000 UTC m=+52.570951776" Jan 16 23:57:57.152367 systemd[1]: Started sshd@8-46.224.42.239:22-62.133.62.168:36988.service - OpenSSH per-connection server daemon (62.133.62.168:36988). Jan 16 23:57:57.436816 sshd[4565]: Received disconnect from 62.133.62.168 port 36988:11: Bye Bye [preauth] Jan 16 23:57:57.436816 sshd[4565]: Disconnected from authenticating user root 62.133.62.168 port 36988 [preauth] Jan 16 23:57:57.441056 systemd[1]: sshd@8-46.224.42.239:22-62.133.62.168:36988.service: Deactivated successfully. Jan 16 23:57:57.583155 containerd[1469]: time="2026-01-16T23:57:57.582668253Z" level=info msg="StopPodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\"" Jan 16 23:57:57.583155 containerd[1469]: time="2026-01-16T23:57:57.582982571Z" level=info msg="StopPodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\"" Jan 16 23:57:57.586651 containerd[1469]: time="2026-01-16T23:57:57.586577502Z" level=info msg="StopPodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\"" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.686 [INFO][4588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.687 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" iface="eth0" netns="/var/run/netns/cni-c831ae05-c8f6-8ac9-4343-f133301e9107" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.688 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" iface="eth0" netns="/var/run/netns/cni-c831ae05-c8f6-8ac9-4343-f133301e9107" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.688 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" iface="eth0" netns="/var/run/netns/cni-c831ae05-c8f6-8ac9-4343-f133301e9107" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.688 [INFO][4588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.688 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.734 [INFO][4616] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.735 [INFO][4616] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.735 [INFO][4616] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.754 [WARNING][4616] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.754 [INFO][4616] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.759 [INFO][4616] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:57.770057 containerd[1469]: 2026-01-16 23:57:57.762 [INFO][4588] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:57:57.778555 containerd[1469]: time="2026-01-16T23:57:57.778413561Z" level=info msg="TearDown network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" successfully" Jan 16 23:57:57.778555 containerd[1469]: time="2026-01-16T23:57:57.778456441Z" level=info msg="StopPodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" returns successfully" Jan 16 23:57:57.781577 containerd[1469]: time="2026-01-16T23:57:57.781522416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-kgtcm,Uid:861b4149-53db-42c9-9886-651961041ffb,Namespace:calico-apiserver,Attempt:1,}" Jan 16 23:57:57.783482 systemd[1]: run-netns-cni\x2dc831ae05\x2dc8f6\x2d8ac9\x2d4343\x2df133301e9107.mount: Deactivated successfully. 
Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.695 [INFO][4592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.696 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" iface="eth0" netns="/var/run/netns/cni-24940b75-5e7d-1413-86f4-b0a5cab37bd7" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.696 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" iface="eth0" netns="/var/run/netns/cni-24940b75-5e7d-1413-86f4-b0a5cab37bd7" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.696 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" iface="eth0" netns="/var/run/netns/cni-24940b75-5e7d-1413-86f4-b0a5cab37bd7" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.696 [INFO][4592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.697 [INFO][4592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.742 [INFO][4621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.742 [INFO][4621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.759 [INFO][4621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.782 [WARNING][4621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.782 [INFO][4621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.787 [INFO][4621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:57.791719 containerd[1469]: 2026-01-16 23:57:57.789 [INFO][4592] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:57:57.794400 containerd[1469]: time="2026-01-16T23:57:57.794033275Z" level=info msg="TearDown network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" successfully" Jan 16 23:57:57.794400 containerd[1469]: time="2026-01-16T23:57:57.794074275Z" level=info msg="StopPodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" returns successfully" Jan 16 23:57:57.797900 containerd[1469]: time="2026-01-16T23:57:57.797535007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bd9998fd-lpljt,Uid:e83705ab-d8ce-46ca-880d-899f69158672,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:57.799808 systemd[1]: run-netns-cni\x2d24940b75\x2d5e7d\x2d1413\x2d86f4\x2db0a5cab37bd7.mount: Deactivated successfully. Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.729 [INFO][4605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.729 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" iface="eth0" netns="/var/run/netns/cni-8c450e00-197e-11ff-f7e9-13b56d5d1948" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.732 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" iface="eth0" netns="/var/run/netns/cni-8c450e00-197e-11ff-f7e9-13b56d5d1948" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.733 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" iface="eth0" netns="/var/run/netns/cni-8c450e00-197e-11ff-f7e9-13b56d5d1948" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.733 [INFO][4605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.733 [INFO][4605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.791 [INFO][4631] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.794 [INFO][4631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.794 [INFO][4631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.810 [WARNING][4631] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.810 [INFO][4631] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.812 [INFO][4631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:57.825490 containerd[1469]: 2026-01-16 23:57:57.817 [INFO][4605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:57:57.827530 containerd[1469]: time="2026-01-16T23:57:57.827390287Z" level=info msg="TearDown network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" successfully" Jan 16 23:57:57.827530 containerd[1469]: time="2026-01-16T23:57:57.827428327Z" level=info msg="StopPodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" returns successfully" Jan 16 23:57:57.829995 containerd[1469]: time="2026-01-16T23:57:57.829740389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4ltk,Uid:f9b64606-aa04-4801-bf16-55e0f797524c,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:57.832375 systemd[1]: run-netns-cni\x2d8c450e00\x2d197e\x2d11ff\x2df7e9\x2d13b56d5d1948.mount: Deactivated successfully. Jan 16 23:57:58.102048 systemd-networkd[1366]: calic17203ebb8a: Link UP Jan 16 23:57:58.106122 systemd-networkd[1366]: calic17203ebb8a: Gained carrier Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.899 [INFO][4651] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0 calico-kube-controllers-68bd9998fd- calico-system e83705ab-d8ce-46ca-880d-899f69158672 992 0 2026-01-16 23:57:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68bd9998fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 calico-kube-controllers-68bd9998fd-lpljt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic17203ebb8a [] [] }} ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.900 [INFO][4651] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.984 [INFO][4676] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" HandleID="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.984 [INFO][4676] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" HandleID="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"calico-kube-controllers-68bd9998fd-lpljt", "timestamp":"2026-01-16 23:57:57.984437866 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.985 [INFO][4676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.985 [INFO][4676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:57.985 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.018 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.032 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.043 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.047 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.052 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.053 [INFO][4676] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.056 [INFO][4676] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.070 [INFO][4676] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.082 [INFO][4676] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.198/26] block=192.168.52.192/26 
handle="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.082 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.198/26] handle="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.082 [INFO][4676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:58.131346 containerd[1469]: 2026-01-16 23:57:58.082 [INFO][4676] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.198/26] IPv6=[] ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" HandleID="k8s-pod-network.83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.132368 containerd[1469]: 2026-01-16 23:57:58.088 [INFO][4651] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0", GenerateName:"calico-kube-controllers-68bd9998fd-", Namespace:"calico-system", SelfLink:"", UID:"e83705ab-d8ce-46ca-880d-899f69158672", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bd9998fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"calico-kube-controllers-68bd9998fd-lpljt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic17203ebb8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:58.132368 containerd[1469]: 2026-01-16 23:57:58.088 [INFO][4651] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.198/32] ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.132368 containerd[1469]: 2026-01-16 23:57:58.088 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic17203ebb8a ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" 
Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.132368 containerd[1469]: 2026-01-16 23:57:58.106 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.132368 containerd[1469]: 2026-01-16 23:57:58.111 [INFO][4651] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0", GenerateName:"calico-kube-controllers-68bd9998fd-", Namespace:"calico-system", SelfLink:"", UID:"e83705ab-d8ce-46ca-880d-899f69158672", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bd9998fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b", Pod:"calico-kube-controllers-68bd9998fd-lpljt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic17203ebb8a", MAC:"7e:ce:a7:74:16:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:58.132368 containerd[1469]: 2026-01-16 23:57:58.128 [INFO][4651] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b" Namespace="calico-system" Pod="calico-kube-controllers-68bd9998fd-lpljt" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:57:58.169085 containerd[1469]: time="2026-01-16T23:57:58.167752651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:58.169085 containerd[1469]: time="2026-01-16T23:57:58.167833971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:58.169085 containerd[1469]: time="2026-01-16T23:57:58.167846891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:58.169085 containerd[1469]: time="2026-01-16T23:57:58.168976402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:58.208204 systemd[1]: Started cri-containerd-83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b.scope - libcontainer container 83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b. Jan 16 23:57:58.219333 systemd-networkd[1366]: cali6cf05955fef: Link UP Jan 16 23:57:58.222237 systemd-networkd[1366]: cali6cf05955fef: Gained carrier Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:57.925 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0 calico-apiserver-c7c7c7dd6- calico-apiserver 861b4149-53db-42c9-9886-651961041ffb 993 0 2026-01-16 23:57:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c7c7c7dd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 calico-apiserver-c7c7c7dd6-kgtcm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cf05955fef [] [] }} ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:57.925 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.009 [INFO][4683] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" HandleID="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.009 [INFO][4683] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" HandleID="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"calico-apiserver-c7c7c7dd6-kgtcm", "timestamp":"2026-01-16 23:57:58.009117508 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.009 [INFO][4683] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.084 [INFO][4683] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.084 [INFO][4683] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.116 [INFO][4683] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.135 [INFO][4683] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.148 [INFO][4683] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.154 [INFO][4683] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.159 [INFO][4683] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.160 [INFO][4683] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.162 [INFO][4683] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.176 [INFO][4683] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.185 [INFO][4683] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.199/26] block=192.168.52.192/26 handle="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.185 [INFO][4683] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.199/26] handle="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.185 [INFO][4683] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 23:57:58.248850 containerd[1469]: 2026-01-16 23:57:58.185 [INFO][4683] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.199/26] IPv6=[] ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" HandleID="k8s-pod-network.52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.249757 containerd[1469]: 2026-01-16 23:57:58.207 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"861b4149-53db-42c9-9886-651961041ffb", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"calico-apiserver-c7c7c7dd6-kgtcm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf05955fef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:58.249757 containerd[1469]: 2026-01-16 23:57:58.207 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.199/32] ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.249757 containerd[1469]: 2026-01-16 23:57:58.207 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cf05955fef ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.249757 containerd[1469]: 2026-01-16 23:57:58.228 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.249757 containerd[1469]: 2026-01-16 23:57:58.229 
[INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"861b4149-53db-42c9-9886-651961041ffb", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f", Pod:"calico-apiserver-c7c7c7dd6-kgtcm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf05955fef", MAC:"96:44:cb:c6:d9:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:58.249757 containerd[1469]: 2026-01-16 23:57:58.245 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f" Namespace="calico-apiserver" Pod="calico-apiserver-c7c7c7dd6-kgtcm" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:57:58.297967 containerd[1469]: time="2026-01-16T23:57:58.296565351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:58.297967 containerd[1469]: time="2026-01-16T23:57:58.296705150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:58.297967 containerd[1469]: time="2026-01-16T23:57:58.296763789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:58.297967 containerd[1469]: time="2026-01-16T23:57:58.297306905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:58.318265 systemd-networkd[1366]: cali0978ed1eb86: Link UP Jan 16 23:57:58.318594 systemd-networkd[1366]: cali0978ed1eb86: Gained carrier Jan 16 23:57:58.343173 systemd[1]: Started cri-containerd-52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f.scope - libcontainer container 52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f. 
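Note on the [4639] entries above: once IPAM returns 192.168.52.199, the plugin picks the host-side veth name cali6cf05955fef and only then writes the endpoint back with its MAC (96:44:cb:c6:d9:1b). Names of this shape are stable 15-character strings, the Linux IFNAMSIZ-1 limit. A minimal Go sketch of one plausible derivation, hashing an endpoint identifier, follows; the SHA-1 input and 11-character truncation are assumptions for illustration, not Calico's documented rule.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable host-side interface name of the form
// "cali" + 11 hex chars, keeping the result under the 15-byte
// IFNAMSIZ-1 limit for Linux interface names. The exact hash input
// and truncation used here are illustrative assumptions.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0"))
}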
Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:57.939 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0 csi-node-driver- calico-system f9b64606-aa04-4801-bf16-55e0f797524c 994 0 2026-01-16 23:57:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-fe2a5b3650 csi-node-driver-j4ltk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0978ed1eb86 [] [] }} ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:57.939 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.033 [INFO][4688] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" HandleID="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.034 [INFO][4688] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" HandleID="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000281790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fe2a5b3650", "pod":"csi-node-driver-j4ltk", "timestamp":"2026-01-16 23:57:58.033604074 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fe2a5b3650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.034 [INFO][4688] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.185 [INFO][4688] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.185 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fe2a5b3650' Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.217 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.234 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.246 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.253 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.260 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.192/26 host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.260 [INFO][4688] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.192/26 handle="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.267 [INFO][4688] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3 Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.278 [INFO][4688] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.192/26 handle="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.307 [INFO][4688] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.200/26] block=192.168.52.192/26 handle="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.307 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.200/26] handle="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" host="ci-4081-3-6-n-fe2a5b3650" Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.307 [INFO][4688] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
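Both IPAM runs above ([4683] for the apiserver pod, [4688] for the CSI pod) follow the same serialized sequence: acquire the host-wide IPAM lock, confirm this host's affinity for the 192.168.52.192/26 block, claim the next free address from it (.199, then .200), write the block back, and release the lock. A self-contained Go sketch of that lock-then-claim pattern follows; it is not Calico code, and the names are invented for illustration.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a host-affine IPAM block such as 192.168.52.192/26.
type block struct {
	mu     sync.Mutex            // stands in for the host-wide IPAM lock
	prefix netip.Prefix
	used   map[netip.Addr]string // addr -> handle
	next   netip.Addr
}

func newBlock(cidr string) *block {
	p := netip.MustParsePrefix(cidr)
	return &block{prefix: p, used: map[netip.Addr]string{}, next: p.Addr()}
}

// assign claims the next free address for a handle, mirroring the
// acquire-lock / claim / write-block / release-lock sequence in the log.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.next; b.prefix.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			b.next = a.Next()
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.prefix)
}

func main() {
	b := newBlock("192.168.52.192/26")
	// Pre-mark .192-.198 as used so the next claims land on .199 and .200,
	// matching the two assignments above (purely illustrative).
	for a := b.prefix.Addr(); a.Less(netip.MustParseAddr("192.168.52.199")); a = a.Next() {
		b.used[a] = "existing"
	}
	for _, h := range []string{"k8s-pod-network.52691ff0", "k8s-pod-network.f529a2e6"} {
		ip, _ := b.assign(h)
		fmt.Println(h, "->", ip)
	}
}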
Jan 16 23:57:58.361248 containerd[1469]: 2026-01-16 23:57:58.307 [INFO][4688] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.200/26] IPv6=[] ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" HandleID="k8s-pod-network.f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.361799 containerd[1469]: 2026-01-16 23:57:58.312 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9b64606-aa04-4801-bf16-55e0f797524c", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"", Pod:"csi-node-driver-j4ltk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0978ed1eb86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:58.361799 containerd[1469]: 2026-01-16 23:57:58.313 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.200/32] ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.361799 containerd[1469]: 2026-01-16 23:57:58.313 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0978ed1eb86 ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.361799 containerd[1469]: 2026-01-16 23:57:58.329 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.361799 containerd[1469]: 2026-01-16 23:57:58.329 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9b64606-aa04-4801-bf16-55e0f797524c", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3", Pod:"csi-node-driver-j4ltk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0978ed1eb86", MAC:"6a:44:09:c7:a9:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:58.361799 containerd[1469]: 2026-01-16 23:57:58.346 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3" Namespace="calico-system" Pod="csi-node-driver-j4ltk" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:57:58.385808 containerd[1469]: time="2026-01-16T23:57:58.383791340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bd9998fd-lpljt,Uid:e83705ab-d8ce-46ca-880d-899f69158672,Namespace:calico-system,Attempt:1,} returns sandbox id \"83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b\"" Jan 16 23:57:58.391136 containerd[1469]: time="2026-01-16T23:57:58.390675005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:57:58.402074 containerd[1469]: time="2026-01-16T23:57:58.401902676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:58.402258 containerd[1469]: time="2026-01-16T23:57:58.402105955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:58.402258 containerd[1469]: time="2026-01-16T23:57:58.402125434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:58.402486 containerd[1469]: time="2026-01-16T23:57:58.402359913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:58.438332 systemd[1]: Started cri-containerd-f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3.scope - libcontainer container f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3. Jan 16 23:57:58.470349 containerd[1469]: time="2026-01-16T23:57:58.470289694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7c7c7dd6-kgtcm,Uid:861b4149-53db-42c9-9886-651961041ffb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f\"" Jan 16 23:57:58.487389 containerd[1469]: time="2026-01-16T23:57:58.487226920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4ltk,Uid:f9b64606-aa04-4801-bf16-55e0f797524c,Namespace:calico-system,Attempt:1,} returns sandbox id \"f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3\"" Jan 16 23:57:58.737855 containerd[1469]: time="2026-01-16T23:57:58.737145500Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:58.743045 containerd[1469]: time="2026-01-16T23:57:58.742445418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:57:58.743045 containerd[1469]: time="2026-01-16T23:57:58.742570777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:58.743730 kubelet[2566]: E0116 23:57:58.743400 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:58.743730 kubelet[2566]: E0116 23:57:58.743478 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:58.748899 kubelet[2566]: E0116 23:57:58.746091 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:58.749099 containerd[1469]: time="2026-01-16T23:57:58.745359075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:58.749472 kubelet[2566]: E0116 23:57:58.749409 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" 
podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:57:58.977467 kubelet[2566]: E0116 23:57:58.975773 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:57:59.090672 containerd[1469]: time="2026-01-16T23:57:59.090442349Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:59.092881 containerd[1469]: time="2026-01-16T23:57:59.092676092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:59.092881 containerd[1469]: time="2026-01-16T23:57:59.092839411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:59.094311 containerd[1469]: time="2026-01-16T23:57:59.093918842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:57:59.094346 kubelet[2566]: E0116 23:57:59.093112 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:59.094346 kubelet[2566]: E0116 23:57:59.093157 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:59.094346 kubelet[2566]: E0116 23:57:59.093366 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5md25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-kgtcm_calico-apiserver(861b4149-53db-42c9-9886-651961041ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:59.095247 kubelet[2566]: E0116 23:57:59.095002 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb" Jan 16 23:57:59.151346 systemd-networkd[1366]: calic17203ebb8a: Gained IPv6LL Jan 16 23:57:59.437216 containerd[1469]: time="2026-01-16T23:57:59.437056439Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:59.439127 containerd[1469]: time="2026-01-16T23:57:59.439065903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:57:59.439280 containerd[1469]: time="2026-01-16T23:57:59.439183462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active 
requests=0, bytes read=69" Jan 16 23:57:59.440457 kubelet[2566]: E0116 23:57:59.439838 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:59.440457 kubelet[2566]: E0116 23:57:59.439901 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:59.441784 kubelet[2566]: E0116 23:57:59.441584 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:59.444753 containerd[1469]: time="2026-01-16T23:57:59.444367701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:57:59.728087 systemd-networkd[1366]: cali6cf05955fef: Gained IPv6LL Jan 16 23:57:59.789070 containerd[1469]: time="2026-01-16T23:57:59.788935167Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:59.790489 systemd-networkd[1366]: cali0978ed1eb86: 
Gained IPv6LL Jan 16 23:57:59.791750 containerd[1469]: time="2026-01-16T23:57:59.790874111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:57:59.791750 containerd[1469]: time="2026-01-16T23:57:59.791031350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:57:59.793684 kubelet[2566]: E0116 23:57:59.792595 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:59.793684 kubelet[2566]: E0116 23:57:59.793007 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:59.793684 kubelet[2566]: E0116 23:57:59.793155 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:59.794918 kubelet[2566]: E0116 23:57:59.794834 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:57:59.983079 kubelet[2566]: E0116 23:57:59.981500 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:57:59.984296 kubelet[2566]: E0116 23:57:59.983986 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb" Jan 16 23:57:59.986225 kubelet[2566]: E0116 23:57:59.986159 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:58:03.584923 containerd[1469]: time="2026-01-16T23:58:03.583208649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:58:03.930732 containerd[1469]: time="2026-01-16T23:58:03.930566814Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:03.932940 containerd[1469]: time="2026-01-16T23:58:03.932819558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:58:03.932940 containerd[1469]: time="2026-01-16T23:58:03.932901517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:58:03.933304 kubelet[2566]: E0116 23:58:03.933253 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:03.934093 kubelet[2566]: E0116 23:58:03.933313 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:03.934093 kubelet[2566]: 
E0116 23:58:03.933422 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:740b13404779446abb486266eca53865,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:03.936978 containerd[1469]: time="2026-01-16T23:58:03.936487210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:58:04.285577 containerd[1469]: time="2026-01-16T23:58:04.284834109Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:04.287736 containerd[1469]: time="2026-01-16T23:58:04.287639648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:58:04.288126 containerd[1469]: time="2026-01-16T23:58:04.287810287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:58:04.288230 kubelet[2566]: E0116 23:58:04.288022 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:58:04.288557 kubelet[2566]: E0116 23:58:04.288341 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:58:04.288782 kubelet[2566]: E0116 23:58:04.288739 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:04.290119 kubelet[2566]: E0116 23:58:04.290053 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c" Jan 16 23:58:04.580006 containerd[1469]: time="2026-01-16T23:58:04.579259971Z" level=info msg="StopPodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\"" Jan 16 
23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.626 [WARNING][4871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9b64606-aa04-4801-bf16-55e0f797524c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3", Pod:"csi-node-driver-j4ltk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0978ed1eb86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.627 [INFO][4871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.627 [INFO][4871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" iface="eth0" netns="" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.627 [INFO][4871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.627 [INFO][4871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.686 [INFO][4879] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.686 [INFO][4879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.686 [INFO][4879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.698 [WARNING][4879] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.698 [INFO][4879] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.701 [INFO][4879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:04.708642 containerd[1469]: 2026-01-16 23:58:04.705 [INFO][4871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.709447 containerd[1469]: time="2026-01-16T23:58:04.708695214Z" level=info msg="TearDown network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" successfully" Jan 16 23:58:04.709447 containerd[1469]: time="2026-01-16T23:58:04.708725294Z" level=info msg="StopPodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" returns successfully" Jan 16 23:58:04.710397 containerd[1469]: time="2026-01-16T23:58:04.710295442Z" level=info msg="RemovePodSandbox for \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\"" Jan 16 23:58:04.710397 containerd[1469]: time="2026-01-16T23:58:04.710386161Z" level=info msg="Forcibly stopping sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\"" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.769 [WARNING][4893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9b64606-aa04-4801-bf16-55e0f797524c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"f529a2e61106525b56a01122e9db8e27ae5ece2cd46d56d5fcdc26745ff3e2a3", Pod:"csi-node-driver-j4ltk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0978ed1eb86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.769 [INFO][4893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.769 [INFO][4893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" iface="eth0" netns="" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.769 [INFO][4893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.769 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.796 [INFO][4900] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.796 [INFO][4900] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.796 [INFO][4900] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.810 [WARNING][4900] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.810 [INFO][4900] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" HandleID="k8s-pod-network.b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-csi--node--driver--j4ltk-eth0" Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.815 [INFO][4900] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:04.819548 containerd[1469]: 2026-01-16 23:58:04.817 [INFO][4893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a" Jan 16 23:58:04.820651 containerd[1469]: time="2026-01-16T23:58:04.820103910Z" level=info msg="TearDown network for sandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" successfully" Jan 16 23:58:04.841069 containerd[1469]: time="2026-01-16T23:58:04.840529199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:04.841069 containerd[1469]: time="2026-01-16T23:58:04.840606038Z" level=info msg="RemovePodSandbox \"b7a3322c32bde7d1c36bad44bcae6518c9ec8c552711565589665eeb47ea7f4a\" returns successfully" Jan 16 23:58:04.842638 containerd[1469]: time="2026-01-16T23:58:04.842576944Z" level=info msg="StopPodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\"" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.886 [WARNING][4914] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"59071300-7ce3-4b48-ae86-f59c1ac4567d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf", Pod:"coredns-668d6bf9bc-9tw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85ab4a39156", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.886 [INFO][4914] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.886 [INFO][4914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" iface="eth0" netns="" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.886 [INFO][4914] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.886 [INFO][4914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.909 [INFO][4921] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.909 [INFO][4921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.909 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.920 [WARNING][4921] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.920 [INFO][4921] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.923 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:04.927582 containerd[1469]: 2026-01-16 23:58:04.925 [INFO][4914] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:04.927582 containerd[1469]: time="2026-01-16T23:58:04.927577635Z" level=info msg="TearDown network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" successfully" Jan 16 23:58:04.928333 containerd[1469]: time="2026-01-16T23:58:04.927603275Z" level=info msg="StopPodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" returns successfully" Jan 16 23:58:04.928333 containerd[1469]: time="2026-01-16T23:58:04.928108791Z" level=info msg="RemovePodSandbox for \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\"" Jan 16 23:58:04.928333 containerd[1469]: time="2026-01-16T23:58:04.928140711Z" level=info msg="Forcibly stopping sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\"" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:04.987 [WARNING][4935] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"59071300-7ce3-4b48-ae86-f59c1ac4567d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"781089e1e253ce85f0920ecee8890cca2d12c98b6912a5b2755adbc3a543d7bf", Pod:"coredns-668d6bf9bc-9tw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85ab4a39156", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:04.987 [INFO][4935] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:04.987 [INFO][4935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" iface="eth0" netns="" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:04.987 [INFO][4935] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:04.987 [INFO][4935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.014 [INFO][4943] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.014 [INFO][4943] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.014 [INFO][4943] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.028 [WARNING][4943] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.029 [INFO][4943] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" HandleID="k8s-pod-network.372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--9tw7r-eth0" Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.031 [INFO][4943] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.036057 containerd[1469]: 2026-01-16 23:58:05.033 [INFO][4935] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc" Jan 16 23:58:05.036057 containerd[1469]: time="2026-01-16T23:58:05.035599838Z" level=info msg="TearDown network for sandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" successfully" Jan 16 23:58:05.041439 containerd[1469]: time="2026-01-16T23:58:05.041375516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:05.041579 containerd[1469]: time="2026-01-16T23:58:05.041520235Z" level=info msg="RemovePodSandbox \"372f97accdf5a50ac72721b082abc929fafc066190f37146341999729c4e8bcc\" returns successfully" Jan 16 23:58:05.042493 containerd[1469]: time="2026-01-16T23:58:05.042180510Z" level=info msg="StopPodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\"" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.095 [WARNING][4957] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300", Pod:"coredns-668d6bf9bc-ktbgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e75a8e80e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.096 [INFO][4957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.096 [INFO][4957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" iface="eth0" netns="" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.096 [INFO][4957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.096 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.123 [INFO][4964] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.123 [INFO][4964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.123 [INFO][4964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.144 [WARNING][4964] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.144 [INFO][4964] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.149 [INFO][4964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.155506 containerd[1469]: 2026-01-16 23:58:05.152 [INFO][4957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.159189 containerd[1469]: time="2026-01-16T23:58:05.158983095Z" level=info msg="TearDown network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" successfully" Jan 16 23:58:05.159189 containerd[1469]: time="2026-01-16T23:58:05.159025294Z" level=info msg="StopPodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" returns successfully" Jan 16 23:58:05.160117 containerd[1469]: time="2026-01-16T23:58:05.160084046Z" level=info msg="RemovePodSandbox for \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\"" Jan 16 23:58:05.160710 containerd[1469]: time="2026-01-16T23:58:05.160465564Z" level=info msg="Forcibly stopping sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\"" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.208 [WARNING][4979] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29ddbdd3-30d8-4cf4-8a5f-7715f3d5b4bb", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"b1f8a9a9026d756d194a11af938ebc1b01763d557f080a4fc1ed2ce38d97c300", Pod:"coredns-668d6bf9bc-ktbgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e75a8e80e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.208 [INFO][4979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.208 [INFO][4979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" iface="eth0" netns="" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.208 [INFO][4979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.208 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.238 [INFO][4986] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.238 [INFO][4986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.238 [INFO][4986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.250 [WARNING][4986] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.250 [INFO][4986] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" HandleID="k8s-pod-network.ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-coredns--668d6bf9bc--ktbgp-eth0" Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.252 [INFO][4986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.260141 containerd[1469]: 2026-01-16 23:58:05.256 [INFO][4979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba" Jan 16 23:58:05.260141 containerd[1469]: time="2026-01-16T23:58:05.260056914Z" level=info msg="TearDown network for sandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" successfully" Jan 16 23:58:05.264915 containerd[1469]: time="2026-01-16T23:58:05.264742600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:05.264915 containerd[1469]: time="2026-01-16T23:58:05.264839999Z" level=info msg="RemovePodSandbox \"ca2931cae940bb49fc6db34fb07c9ccab972a1f20d06a7cf7b02651ba8eeacba\" returns successfully" Jan 16 23:58:05.266867 containerd[1469]: time="2026-01-16T23:58:05.266142469Z" level=info msg="StopPodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\"" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.319 [WARNING][5000] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"861b4149-53db-42c9-9886-651961041ffb", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f", Pod:"calico-apiserver-c7c7c7dd6-kgtcm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf05955fef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.319 [INFO][5000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.319 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" iface="eth0" netns="" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.319 [INFO][5000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.319 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.348 [INFO][5007] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.348 [INFO][5007] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.348 [INFO][5007] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.364 [WARNING][5007] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.364 [INFO][5007] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.367 [INFO][5007] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.372530 containerd[1469]: 2026-01-16 23:58:05.370 [INFO][5000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.373431 containerd[1469]: time="2026-01-16T23:58:05.373301604Z" level=info msg="TearDown network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" successfully" Jan 16 23:58:05.373431 containerd[1469]: time="2026-01-16T23:58:05.373341404Z" level=info msg="StopPodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" returns successfully" Jan 16 23:58:05.374649 containerd[1469]: time="2026-01-16T23:58:05.374574355Z" level=info msg="RemovePodSandbox for \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\"" Jan 16 23:58:05.374649 containerd[1469]: time="2026-01-16T23:58:05.374612555Z" level=info msg="Forcibly stopping sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\"" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.425 [WARNING][5022] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"861b4149-53db-42c9-9886-651961041ffb", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"52691ff05cebeae2e4577f9cbaed5fe249eaee9f6034298d8335f65a87544d6f", Pod:"calico-apiserver-c7c7c7dd6-kgtcm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf05955fef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.426 [INFO][5022] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.426 [INFO][5022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" iface="eth0" netns="" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.426 [INFO][5022] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.426 [INFO][5022] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.470 [INFO][5029] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.471 [INFO][5029] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.471 [INFO][5029] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.483 [WARNING][5029] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.483 [INFO][5029] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" HandleID="k8s-pod-network.b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--kgtcm-eth0" Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.486 [INFO][5029] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.491972 containerd[1469]: 2026-01-16 23:58:05.489 [INFO][5022] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd" Jan 16 23:58:05.491972 containerd[1469]: time="2026-01-16T23:58:05.491903255Z" level=info msg="TearDown network for sandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" successfully" Jan 16 23:58:05.496304 containerd[1469]: time="2026-01-16T23:58:05.496246984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:05.496751 containerd[1469]: time="2026-01-16T23:58:05.496314863Z" level=info msg="RemovePodSandbox \"b0b45ca3b3b60ea73958fcf4bbea6bcef53c53dab5003e89ddea87423499aebd\" returns successfully" Jan 16 23:58:05.497728 containerd[1469]: time="2026-01-16T23:58:05.497433655Z" level=info msg="StopPodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\"" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.549 [WARNING][5043] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"195fd954-db29-4a46-a5c3-26216d80a6af", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c", Pod:"goldmane-666569f655-jr25d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid738080d3eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.550 [INFO][5043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.550 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" iface="eth0" netns="" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.550 [INFO][5043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.550 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.578 [INFO][5050] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.578 [INFO][5050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.578 [INFO][5050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.594 [WARNING][5050] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.594 [INFO][5050] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.597 [INFO][5050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.602738 containerd[1469]: 2026-01-16 23:58:05.600 [INFO][5043] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.602738 containerd[1469]: time="2026-01-16T23:58:05.602753883Z" level=info msg="TearDown network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" successfully" Jan 16 23:58:05.602738 containerd[1469]: time="2026-01-16T23:58:05.602785243Z" level=info msg="StopPodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" returns successfully" Jan 16 23:58:05.604285 containerd[1469]: time="2026-01-16T23:58:05.604217752Z" level=info msg="RemovePodSandbox for \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\"" Jan 16 23:58:05.604285 containerd[1469]: time="2026-01-16T23:58:05.604279512Z" level=info msg="Forcibly stopping sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\"" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.660 [WARNING][5064] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"195fd954-db29-4a46-a5c3-26216d80a6af", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"2d89683823b8dfa24558f775487f2a13559ca39e8b8914b9214118bf3e18ee6c", Pod:"goldmane-666569f655-jr25d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid738080d3eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.660 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.660 [INFO][5064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" iface="eth0" netns="" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.660 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.660 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.689 [INFO][5071] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.690 [INFO][5071] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.690 [INFO][5071] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.703 [WARNING][5071] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.703 [INFO][5071] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" HandleID="k8s-pod-network.d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-goldmane--666569f655--jr25d-eth0" Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.706 [INFO][5071] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.711810 containerd[1469]: 2026-01-16 23:58:05.709 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191" Jan 16 23:58:05.712898 containerd[1469]: time="2026-01-16T23:58:05.711869524Z" level=info msg="TearDown network for sandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" successfully" Jan 16 23:58:05.716047 containerd[1469]: time="2026-01-16T23:58:05.715978054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:05.716268 containerd[1469]: time="2026-01-16T23:58:05.716065013Z" level=info msg="RemovePodSandbox \"d5b43f50f38347de7ac99fc30ba9192dcbe4e414ef3d800b6f310f994e3ea191\" returns successfully" Jan 16 23:58:05.716822 containerd[1469]: time="2026-01-16T23:58:05.716751088Z" level=info msg="StopPodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\"" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.759 [WARNING][5085] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.760 [INFO][5085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.760 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" iface="eth0" netns="" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.760 [INFO][5085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.760 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.787 [INFO][5092] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.788 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.788 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.804 [WARNING][5092] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.804 [INFO][5092] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.807 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.811411 containerd[1469]: 2026-01-16 23:58:05.809 [INFO][5085] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.812272 containerd[1469]: time="2026-01-16T23:58:05.811452434Z" level=info msg="TearDown network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" successfully" Jan 16 23:58:05.812272 containerd[1469]: time="2026-01-16T23:58:05.811480474Z" level=info msg="StopPodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" returns successfully" Jan 16 23:58:05.812272 containerd[1469]: time="2026-01-16T23:58:05.812141509Z" level=info msg="RemovePodSandbox for \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\"" Jan 16 23:58:05.812272 containerd[1469]: time="2026-01-16T23:58:05.812170469Z" level=info msg="Forcibly stopping sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\"" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.863 [WARNING][5106] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" WorkloadEndpoint="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.863 [INFO][5106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.863 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" iface="eth0" netns="" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.863 [INFO][5106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.863 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.892 [INFO][5113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.892 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.892 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.902 [WARNING][5113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.902 [INFO][5113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" HandleID="k8s-pod-network.86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-whisker--65b547c47d--4k7mm-eth0" Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.904 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:05.908757 containerd[1469]: 2026-01-16 23:58:05.906 [INFO][5106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b" Jan 16 23:58:05.909173 containerd[1469]: time="2026-01-16T23:58:05.908788921Z" level=info msg="TearDown network for sandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" successfully" Jan 16 23:58:05.917497 containerd[1469]: time="2026-01-16T23:58:05.917167700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:05.917497 containerd[1469]: time="2026-01-16T23:58:05.917238139Z" level=info msg="RemovePodSandbox \"86b346fee605a870a4dfa9dfff1a653e49704e18c7a67f4c2476a170ff73ed6b\" returns successfully" Jan 16 23:58:05.918127 containerd[1469]: time="2026-01-16T23:58:05.917726576Z" level=info msg="StopPodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\"" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:05.979 [WARNING][5127] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0", GenerateName:"calico-kube-controllers-68bd9998fd-", Namespace:"calico-system", SelfLink:"", UID:"e83705ab-d8ce-46ca-880d-899f69158672", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bd9998fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b", Pod:"calico-kube-controllers-68bd9998fd-lpljt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic17203ebb8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:05.979 [INFO][5127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:05.979 [INFO][5127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" iface="eth0" netns="" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:05.979 [INFO][5127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:05.979 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.007 [INFO][5134] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.008 [INFO][5134] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.008 [INFO][5134] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.021 [WARNING][5134] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.021 [INFO][5134] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.024 [INFO][5134] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:06.030120 containerd[1469]: 2026-01-16 23:58:06.027 [INFO][5127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.030611 containerd[1469]: time="2026-01-16T23:58:06.030168914Z" level=info msg="TearDown network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" successfully" Jan 16 23:58:06.030611 containerd[1469]: time="2026-01-16T23:58:06.030196513Z" level=info msg="StopPodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" returns successfully" Jan 16 23:58:06.031686 containerd[1469]: time="2026-01-16T23:58:06.031514064Z" level=info msg="RemovePodSandbox for \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\"" Jan 16 23:58:06.031686 containerd[1469]: time="2026-01-16T23:58:06.031593343Z" level=info msg="Forcibly stopping sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\"" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.073 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0", GenerateName:"calico-kube-controllers-68bd9998fd-", Namespace:"calico-system", SelfLink:"", UID:"e83705ab-d8ce-46ca-880d-899f69158672", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bd9998fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"83b0456ccd50f25e75495536e147bb55a2ae359ea09f4046eaba506efd0b842b", Pod:"calico-kube-controllers-68bd9998fd-lpljt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic17203ebb8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.074 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.074 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" iface="eth0" netns="" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.074 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.074 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.100 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.100 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.100 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.112 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.112 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" HandleID="k8s-pod-network.1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--kube--controllers--68bd9998fd--lpljt-eth0" Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.114 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:06.120198 containerd[1469]: 2026-01-16 23:58:06.116 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c" Jan 16 23:58:06.120198 containerd[1469]: time="2026-01-16T23:58:06.120167820Z" level=info msg="TearDown network for sandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" successfully" Jan 16 23:58:06.130604 containerd[1469]: time="2026-01-16T23:58:06.130548265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:06.130755 containerd[1469]: time="2026-01-16T23:58:06.130646584Z" level=info msg="RemovePodSandbox \"1c099c7b6edcf90c7c98d9cfb00e2eba031feee97a663feca17d01d07b01764c\" returns successfully" Jan 16 23:58:06.131842 containerd[1469]: time="2026-01-16T23:58:06.131147740Z" level=info msg="StopPodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\"" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.182 [WARNING][5170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9c78c11-17fe-4d54-827b-16ba9d81154b", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c", Pod:"calico-apiserver-c7c7c7dd6-26hzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0beb6567bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.182 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.183 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" iface="eth0" netns="" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.183 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.183 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.202 [INFO][5177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.202 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.202 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.213 [WARNING][5177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.213 [INFO][5177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.216 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:06.220076 containerd[1469]: 2026-01-16 23:58:06.218 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.221483 containerd[1469]: time="2026-01-16T23:58:06.220398052Z" level=info msg="TearDown network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" successfully" Jan 16 23:58:06.221483 containerd[1469]: time="2026-01-16T23:58:06.220433012Z" level=info msg="StopPodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" returns successfully" Jan 16 23:58:06.222620 containerd[1469]: time="2026-01-16T23:58:06.222198079Z" level=info msg="RemovePodSandbox for \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\"" Jan 16 23:58:06.222620 containerd[1469]: time="2026-01-16T23:58:06.222238479Z" level=info msg="Forcibly stopping sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\"" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.288 [WARNING][5191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0", GenerateName:"calico-apiserver-c7c7c7dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9c78c11-17fe-4d54-827b-16ba9d81154b", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7c7c7dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fe2a5b3650", ContainerID:"6d2ea36f147ff898c1b4bdf37b7fe81c49c79d024caa69926b151130c3946f0c", Pod:"calico-apiserver-c7c7c7dd6-26hzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0beb6567bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.288 [INFO][5191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.288 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" iface="eth0" netns="" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.288 [INFO][5191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.288 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.315 [INFO][5199] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.315 [INFO][5199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.315 [INFO][5199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.329 [WARNING][5199] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.329 [INFO][5199] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" HandleID="k8s-pod-network.a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Workload="ci--4081--3--6--n--fe2a5b3650-k8s-calico--apiserver--c7c7c7dd6--26hzj-eth0" Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.332 [INFO][5199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:58:06.336985 containerd[1469]: 2026-01-16 23:58:06.334 [INFO][5191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0" Jan 16 23:58:06.338990 containerd[1469]: time="2026-01-16T23:58:06.337620321Z" level=info msg="TearDown network for sandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" successfully" Jan 16 23:58:06.343445 containerd[1469]: time="2026-01-16T23:58:06.343326120Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:58:06.344085 containerd[1469]: time="2026-01-16T23:58:06.343706757Z" level=info msg="RemovePodSandbox \"a7ee57f7bb023a0ee0f2568b9564b7017c07a878d1c81bc06782f6229a5dcbc0\" returns successfully" Jan 16 23:58:07.585456 containerd[1469]: time="2026-01-16T23:58:07.584546703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:58:07.923313 containerd[1469]: time="2026-01-16T23:58:07.922833707Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:07.925220 containerd[1469]: time="2026-01-16T23:58:07.925134731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:58:07.925402 containerd[1469]: time="2026-01-16T23:58:07.925345929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:07.925655 kubelet[2566]: E0116 23:58:07.925547 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:58:07.925655 kubelet[2566]: E0116 23:58:07.925669 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:58:07.926979 kubelet[2566]: E0116 23:58:07.925834 2566 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcknv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jr25d_calico-system(195fd954-db29-4a46-a5c3-26216d80a6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:07.927316 kubelet[2566]: E0116 23:58:07.927239 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 
23:58:09.584281 containerd[1469]: time="2026-01-16T23:58:09.584207109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:09.925536 containerd[1469]: time="2026-01-16T23:58:09.925197262Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:09.927710 containerd[1469]: time="2026-01-16T23:58:09.927597439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:09.928211 containerd[1469]: time="2026-01-16T23:58:09.927741572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:09.928284 kubelet[2566]: E0116 23:58:09.927897 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:09.928284 kubelet[2566]: E0116 23:58:09.927958 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:09.928284 kubelet[2566]: E0116 23:58:09.928085 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hczh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-26hzj_calico-apiserver(b9c78c11-17fe-4d54-827b-16ba9d81154b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:09.930091 kubelet[2566]: E0116 23:58:09.929489 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:58:10.586316 containerd[1469]: time="2026-01-16T23:58:10.585282370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:58:10.930924 containerd[1469]: time="2026-01-16T23:58:10.930645350Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:10.932500 containerd[1469]: time="2026-01-16T23:58:10.932385943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:58:10.932698 containerd[1469]: time="2026-01-16T23:58:10.932598242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:58:10.932971 kubelet[2566]: E0116 23:58:10.932886 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:58:10.934963 kubelet[2566]: E0116 23:58:10.932985 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:58:10.934963 kubelet[2566]: E0116 23:58:10.933390 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:10.934963 kubelet[2566]: E0116 23:58:10.934605 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:58:10.935281 containerd[1469]: time="2026-01-16T23:58:10.933480559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:11.275236 containerd[1469]: time="2026-01-16T23:58:11.275179661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:11.276927 containerd[1469]: time="2026-01-16T23:58:11.276807280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:11.278006 containerd[1469]: time="2026-01-16T23:58:11.276935091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:11.278424 kubelet[2566]: E0116 23:58:11.277626 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:11.278424 kubelet[2566]: E0116 23:58:11.277685 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:11.279088 kubelet[2566]: E0116 23:58:11.277883 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5md25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-kgtcm_calico-apiserver(861b4149-53db-42c9-9886-651961041ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:11.280930 kubelet[2566]: E0116 23:58:11.279848 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb" Jan 16 23:58:13.585098 containerd[1469]: time="2026-01-16T23:58:13.584982823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:58:13.929480 containerd[1469]: time="2026-01-16T23:58:13.929029678Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:13.931719 containerd[1469]: time="2026-01-16T23:58:13.931589163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:58:13.931719 containerd[1469]: time="2026-01-16T23:58:13.931668210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:58:13.932099 kubelet[2566]: E0116 23:58:13.932000 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:58:13.932883 kubelet[2566]: E0116 23:58:13.932099 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:58:13.932883 kubelet[2566]: E0116 23:58:13.932267 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:13.937392 containerd[1469]: time="2026-01-16T23:58:13.936007678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:58:14.279957 containerd[1469]: time="2026-01-16T23:58:14.279887578Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:14.282307 containerd[1469]: time="2026-01-16T23:58:14.281998343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:58:14.282307 containerd[1469]: time="2026-01-16T23:58:14.282080710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:58:14.285194 kubelet[2566]: E0116 23:58:14.282522 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:58:14.285194 kubelet[2566]: E0116 23:58:14.282581 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:58:14.285194 kubelet[2566]: E0116 23:58:14.282718 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:14.285828 kubelet[2566]: E0116 23:58:14.285769 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:58:18.590933 kubelet[2566]: E0116 23:58:18.590874 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c" Jan 16 23:58:21.584341 kubelet[2566]: E0116 23:58:21.583922 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af" Jan 16 23:58:22.587604 kubelet[2566]: E0116 23:58:22.587190 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:58:23.600901 systemd[1]: Started sshd@9-46.224.42.239:22-180.184.160.202:16272.service - OpenSSH per-connection server daemon (180.184.160.202:16272). 
Jan 16 23:58:24.588019 kubelet[2566]: E0116 23:58:24.586178 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:58:24.735132 sshd[5239]: Received disconnect from 180.184.160.202 port 16272:11: Bye Bye [preauth]
Jan 16 23:58:24.735132 sshd[5239]: Disconnected from authenticating user root 180.184.160.202 port 16272 [preauth]
Jan 16 23:58:24.740056 systemd[1]: sshd@9-46.224.42.239:22-180.184.160.202:16272.service: Deactivated successfully.
Jan 16 23:58:26.585968 kubelet[2566]: E0116 23:58:26.585520 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:58:27.585042 kubelet[2566]: E0116 23:58:27.584924 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:58:29.583886 containerd[1469]: time="2026-01-16T23:58:29.583842948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 16 23:58:29.918008 containerd[1469]: time="2026-01-16T23:58:29.917123624Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:29.920038 containerd[1469]: time="2026-01-16T23:58:29.919820919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 16 23:58:29.920189 containerd[1469]: time="2026-01-16T23:58:29.920101933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 16 23:58:29.920272 kubelet[2566]: E0116 23:58:29.920229 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:58:29.920567 kubelet[2566]: E0116 23:58:29.920283 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:58:29.920567 kubelet[2566]: E0116 23:58:29.920384 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:740b13404779446abb486266eca53865,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:29.923161 containerd[1469]: time="2026-01-16T23:58:29.923121324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 16 23:58:30.271257 containerd[1469]: time="2026-01-16T23:58:30.271064270Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:30.272819 containerd[1469]: time="2026-01-16T23:58:30.272607026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 16 23:58:30.272819 containerd[1469]: time="2026-01-16T23:58:30.272743392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:58:30.273006 kubelet[2566]: E0116 23:58:30.272939 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:58:30.273062 kubelet[2566]: E0116 23:58:30.273010 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:58:30.273201 kubelet[2566]: E0116 23:58:30.273159 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:30.274699 kubelet[2566]: E0116 23:58:30.274642 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 16 23:58:35.585548 containerd[1469]: time="2026-01-16T23:58:35.585484003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 16 23:58:35.928263 containerd[1469]: time="2026-01-16T23:58:35.927791821Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:35.929794 containerd[1469]: time="2026-01-16T23:58:35.929720782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 16 23:58:35.930629 containerd[1469]: time="2026-01-16T23:58:35.929765424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 16 23:58:35.931041 kubelet[2566]: E0116 23:58:35.930922 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 23:58:35.932105 kubelet[2566]: E0116 23:58:35.931196 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 23:58:35.932308 kubelet[2566]: E0116 23:58:35.931787 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcknv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jr25d_calico-system(195fd954-db29-4a46-a5c3-26216d80a6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:35.933804 kubelet[2566]: E0116 23:58:35.933734 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:58:36.585875 containerd[1469]: time="2026-01-16T23:58:36.584668476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 23:58:36.927485 containerd[1469]: time="2026-01-16T23:58:36.926691838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:36.929034 containerd[1469]: time="2026-01-16T23:58:36.928816246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 23:58:36.929034 containerd[1469]: time="2026-01-16T23:58:36.928858047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 16 23:58:36.929514 kubelet[2566]: E0116 23:58:36.929197 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:58:36.929514 kubelet[2566]: E0116 23:58:36.929241 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:58:36.930147 kubelet[2566]: E0116 23:58:36.929517 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hczh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-26hzj_calico-apiserver(b9c78c11-17fe-4d54-827b-16ba9d81154b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:36.931515 kubelet[2566]: E0116 23:58:36.930913 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:58:36.932758 containerd[1469]: time="2026-01-16T23:58:36.930588998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 16 23:58:37.273282 containerd[1469]: time="2026-01-16T23:58:37.273220314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:37.274940 containerd[1469]: time="2026-01-16T23:58:37.274883061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 16 23:58:37.275066 containerd[1469]: time="2026-01-16T23:58:37.275014266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:58:37.275671 kubelet[2566]: E0116 23:58:37.275400 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 23:58:37.275671 kubelet[2566]: E0116 23:58:37.275458 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 23:58:37.275671 kubelet[2566]: E0116 23:58:37.275601 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:37.277335 kubelet[2566]: E0116 23:58:37.277274 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 16 23:58:37.585187 containerd[1469]: time="2026-01-16T23:58:37.584602263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 23:58:37.918385 containerd[1469]: time="2026-01-16T23:58:37.917877125Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:37.920452 containerd[1469]: time="2026-01-16T23:58:37.920348864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 23:58:37.920571 containerd[1469]: time="2026-01-16T23:58:37.920433427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 16 23:58:37.920970 kubelet[2566]: E0116 23:58:37.920867 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:58:37.921158 kubelet[2566]: E0116 23:58:37.920978 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 23:58:37.921654 kubelet[2566]: E0116 23:58:37.921510 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5md25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-kgtcm_calico-apiserver(861b4149-53db-42c9-9886-651961041ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:37.923137 kubelet[2566]: E0116 23:58:37.923083 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:58:41.284353 systemd[1]: Started sshd@10-46.224.42.239:22-201.217.12.57:38414.service - OpenSSH per-connection server daemon (201.217.12.57:38414).
Jan 16 23:58:41.585026 containerd[1469]: time="2026-01-16T23:58:41.583654192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 16 23:58:41.923912 containerd[1469]: time="2026-01-16T23:58:41.923762370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:41.925502 containerd[1469]: time="2026-01-16T23:58:41.925167620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 16 23:58:41.925502 containerd[1469]: time="2026-01-16T23:58:41.925285584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 16 23:58:41.926144 kubelet[2566]: E0116 23:58:41.925761 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 16 23:58:41.926144 kubelet[2566]: E0116 23:58:41.925844 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 16 23:58:41.926144 kubelet[2566]: E0116 23:58:41.926036 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:41.929077 containerd[1469]: time="2026-01-16T23:58:41.928766948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 16 23:58:42.257807 containerd[1469]: time="2026-01-16T23:58:42.257671757Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:58:42.259842 containerd[1469]: time="2026-01-16T23:58:42.259727108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 16 23:58:42.259842 containerd[1469]: time="2026-01-16T23:58:42.259799710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 16 23:58:42.260046 kubelet[2566]: E0116 23:58:42.259989 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 16 23:58:42.260150 kubelet[2566]: E0116 23:58:42.260044 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 16 23:58:42.260655 kubelet[2566]: E0116 23:58:42.260234 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:58:42.261539 kubelet[2566]: E0116 23:58:42.261476 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:58:43.503480 sshd[5255]: Invalid user akash from 201.217.12.57 port 38414
Jan 16 23:58:43.585401 kubelet[2566]: E0116 23:58:43.585218 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 16 23:58:43.780061 sshd[5255]: Received disconnect from 201.217.12.57 port 38414:11: Bye Bye [preauth]
Jan 16 23:58:43.780061 sshd[5255]: Disconnected from invalid user akash 201.217.12.57 port 38414 [preauth]
Jan 16 23:58:43.784664 systemd[1]: sshd@10-46.224.42.239:22-201.217.12.57:38414.service: Deactivated successfully.
Jan 16 23:58:47.584551 kubelet[2566]: E0116 23:58:47.584307 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:58:48.586763 kubelet[2566]: E0116 23:58:48.586694 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:58:48.590969 kubelet[2566]: E0116 23:58:48.587745 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:58:48.597204 kubelet[2566]: E0116 23:58:48.587819 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 16 23:58:57.588959 kubelet[2566]: E0116 23:58:57.588584 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 16 23:58:57.590199 kubelet[2566]: E0116 23:58:57.590145 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:58:59.584973 kubelet[2566]: E0116 23:58:59.584856 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:58:59.586148 kubelet[2566]: E0116 23:58:59.585390 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 16 23:59:01.584648 kubelet[2566]: E0116 23:59:01.584243 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:59:03.587113 kubelet[2566]: E0116 23:59:03.585952 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:59:08.589499 kubelet[2566]: E0116 23:59:08.589435 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:59:10.587968 kubelet[2566]: E0116 23:59:10.587439 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:59:11.583630 containerd[1469]: time="2026-01-16T23:59:11.583177863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 16 23:59:11.585211 kubelet[2566]: E0116 23:59:11.584989 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 16 23:59:11.936246 containerd[1469]: time="2026-01-16T23:59:11.935548966Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:59:11.937610 containerd[1469]: time="2026-01-16T23:59:11.937452798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 16 23:59:11.937610 containerd[1469]: time="2026-01-16T23:59:11.937489598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 16 23:59:11.937891 kubelet[2566]: E0116 23:59:11.937816 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:59:11.939136 kubelet[2566]: E0116 23:59:11.937895 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 23:59:11.939136 kubelet[2566]: E0116 23:59:11.938097 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:740b13404779446abb486266eca53865,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:59:11.942235 containerd[1469]: time="2026-01-16T23:59:11.942188236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 16 23:59:12.474727 containerd[1469]: time="2026-01-16T23:59:12.474541336Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:59:12.476867 containerd[1469]: time="2026-01-16T23:59:12.476748572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 16 23:59:12.476867 containerd[1469]: time="2026-01-16T23:59:12.476804333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:59:12.477546 kubelet[2566]: E0116 23:59:12.477241 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:59:12.477546 kubelet[2566]: E0116 23:59:12.477295 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 23:59:12.477546 kubelet[2566]: E0116 23:59:12.477476 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:59:12.479044 kubelet[2566]: E0116 23:59:12.478830 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 16 23:59:12.583446 kubelet[2566]: E0116 23:59:12.583054 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:59:14.587204 kubelet[2566]: E0116 23:59:14.587096 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:59:20.584234 kubelet[2566]: E0116 23:59:20.584156 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:59:24.586553 kubelet[2566]: E0116 23:59:24.586463 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 16 23:59:25.584865 containerd[1469]: time="2026-01-16T23:59:25.584788851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 16 23:59:25.933628 containerd[1469]: time="2026-01-16T23:59:25.933472548Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:59:25.935485 containerd[1469]: time="2026-01-16T23:59:25.935387691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 16 23:59:25.935638 containerd[1469]: time="2026-01-16T23:59:25.935534573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 16 23:59:25.936750 kubelet[2566]: E0116 23:59:25.936132 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 23:59:25.936750 kubelet[2566]: E0116 23:59:25.936193 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 23:59:25.936750 kubelet[2566]: E0116 23:59:25.936332 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcknv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jr25d_calico-system(195fd954-db29-4a46-a5c3-26216d80a6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 16 23:59:25.937991 kubelet[2566]: E0116 23:59:25.937901 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:59:26.587220 containerd[1469]: time="2026-01-16T23:59:26.587163707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 16 23:59:26.941152 containerd[1469]: time="2026-01-16T23:59:26.940882860Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 16 23:59:26.944963 containerd[1469]: time="2026-01-16T23:59:26.942829083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 16 23:59:26.944963 containerd[1469]: time="2026-01-16T23:59:26.942961044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 16 23:59:26.945354 kubelet[2566]: E0116 23:59:26.945296 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 23:59:26.945770 kubelet[2566]: E0116 23:59:26.945358 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16
23:59:26.945770 kubelet[2566]: E0116 23:59:26.945534 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:26.946799 kubelet[2566]: E0116 23:59:26.946736 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" 
podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:59:27.584061 containerd[1469]: time="2026-01-16T23:59:27.583997059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:59:27.926046 containerd[1469]: time="2026-01-16T23:59:27.925877591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:27.930812 containerd[1469]: time="2026-01-16T23:59:27.930718847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:59:27.930812 containerd[1469]: time="2026-01-16T23:59:27.930778088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:27.931391 kubelet[2566]: E0116 23:59:27.930978 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:27.931391 kubelet[2566]: E0116 23:59:27.931026 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:27.931639 containerd[1469]: time="2026-01-16T23:59:27.931377415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:59:27.931993 kubelet[2566]: E0116 23:59:27.931841 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hczh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-26hzj_calico-apiserver(b9c78c11-17fe-4d54-827b-16ba9d81154b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:27.933746 kubelet[2566]: E0116 23:59:27.933606 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b" Jan 16 23:59:28.266585 containerd[1469]: time="2026-01-16T23:59:28.266411646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:28.268174 containerd[1469]: time="2026-01-16T23:59:28.267988344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:59:28.268358 containerd[1469]: time="2026-01-16T23:59:28.268127386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:28.268615 kubelet[2566]: E0116 23:59:28.268556 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:28.269875 kubelet[2566]: E0116 23:59:28.268610 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:28.269875 kubelet[2566]: E0116 23:59:28.268739 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5md25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c7c7c7dd6-kgtcm_calico-apiserver(861b4149-53db-42c9-9886-651961041ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:28.270672 kubelet[2566]: E0116 23:59:28.270452 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb" Jan 16 23:59:31.584213 containerd[1469]: time="2026-01-16T23:59:31.583130269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:59:31.926363 containerd[1469]: time="2026-01-16T23:59:31.926193278Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:31.929091 containerd[1469]: time="2026-01-16T23:59:31.928589904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:59:31.929091 containerd[1469]: time="2026-01-16T23:59:31.928669065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:59:31.929327 kubelet[2566]: E0116 23:59:31.929206 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:31.929654 kubelet[2566]: E0116 23:59:31.929319 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:31.930256 kubelet[2566]: E0116 23:59:31.929806 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:31.932762 containerd[1469]: time="2026-01-16T23:59:31.932484426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:59:32.268640 containerd[1469]: 
time="2026-01-16T23:59:32.268399424Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:32.270200 containerd[1469]: time="2026-01-16T23:59:32.270154523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:59:32.270442 containerd[1469]: time="2026-01-16T23:59:32.270223244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:59:32.271086 kubelet[2566]: E0116 23:59:32.270575 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:59:32.271086 kubelet[2566]: E0116 23:59:32.270622 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:59:32.271086 kubelet[2566]: E0116 23:59:32.270726 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:32.272152 kubelet[2566]: E0116 23:59:32.272091 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c" Jan 16 23:59:36.588644 kubelet[2566]: E0116 23:59:36.588577 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c" Jan 16 23:59:37.483385 systemd[1]: Started sshd@11-46.224.42.239:22-4.153.228.146:49256.service - OpenSSH per-connection server daemon (4.153.228.146:49256). Jan 16 23:59:38.112100 sshd[5344]: Accepted publickey for core from 4.153.228.146 port 49256 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:38.115666 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:38.124647 systemd-logind[1448]: New session 8 of user core. Jan 16 23:59:38.128692 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 23:59:38.584160 kubelet[2566]: E0116 23:59:38.583981 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672" Jan 16 23:59:38.678006 sshd[5344]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:38.684165 systemd[1]: sshd@11-46.224.42.239:22-4.153.228.146:49256.service: Deactivated successfully. Jan 16 23:59:38.688137 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 23:59:38.690199 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 16 23:59:38.692835 systemd-logind[1448]: Removed session 8. 
Jan 16 23:59:39.583941 kubelet[2566]: E0116 23:59:39.583121 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:59:41.584030 kubelet[2566]: E0116 23:59:41.583894 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:59:41.584030 kubelet[2566]: E0116 23:59:41.583922 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:59:43.798613 systemd[1]: Started sshd@12-46.224.42.239:22-4.153.228.146:49258.service - OpenSSH per-connection server daemon (4.153.228.146:49258).
Jan 16 23:59:44.430964 sshd[5359]: Accepted publickey for core from 4.153.228.146 port 49258 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:44.433163 sshd[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:44.442211 systemd-logind[1448]: New session 9 of user core.
Jan 16 23:59:44.450182 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 16 23:59:44.975776 sshd[5359]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:44.983674 systemd[1]: sshd@12-46.224.42.239:22-4.153.228.146:49258.service: Deactivated successfully.
Jan 16 23:59:44.989974 systemd[1]: session-9.scope: Deactivated successfully.
Jan 16 23:59:44.993732 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit.
Jan 16 23:59:44.996409 systemd-logind[1448]: Removed session 9.
Jan 16 23:59:45.584377 kubelet[2566]: E0116 23:59:45.584311 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:59:47.586109 kubelet[2566]: E0116 23:59:47.586020 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 16 23:59:50.080647 systemd[1]: Started sshd@13-46.224.42.239:22-4.153.228.146:50676.service - OpenSSH per-connection server daemon (4.153.228.146:50676).
Jan 16 23:59:50.696676 sshd[5396]: Accepted publickey for core from 4.153.228.146 port 50676 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:50.701175 sshd[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:50.706864 systemd-logind[1448]: New session 10 of user core.
Jan 16 23:59:50.712223 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 16 23:59:51.235420 sshd[5396]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:51.242656 systemd[1]: sshd@13-46.224.42.239:22-4.153.228.146:50676.service: Deactivated successfully.
Jan 16 23:59:51.247608 systemd[1]: session-10.scope: Deactivated successfully.
Jan 16 23:59:51.249485 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Jan 16 23:59:51.250647 systemd-logind[1448]: Removed session 10.
Jan 16 23:59:51.362488 systemd[1]: Started sshd@14-46.224.42.239:22-4.153.228.146:50692.service - OpenSSH per-connection server daemon (4.153.228.146:50692).
Jan 16 23:59:51.584730 kubelet[2566]: E0116 23:59:51.584686 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 16 23:59:51.992667 sshd[5410]: Accepted publickey for core from 4.153.228.146 port 50692 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:51.994240 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:52.002077 systemd-logind[1448]: New session 11 of user core.
Jan 16 23:59:52.011208 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 16 23:59:52.595110 systemd[1]: Started sshd@15-46.224.42.239:22-159.89.121.21:54312.service - OpenSSH per-connection server daemon (159.89.121.21:54312).
Jan 16 23:59:52.610228 sshd[5410]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:52.620686 systemd[1]: sshd@14-46.224.42.239:22-4.153.228.146:50692.service: Deactivated successfully.
Jan 16 23:59:52.625811 systemd[1]: session-11.scope: Deactivated successfully.
Jan 16 23:59:52.627385 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit.
Jan 16 23:59:52.630589 systemd-logind[1448]: Removed session 11.
Jan 16 23:59:52.718417 systemd[1]: Started sshd@16-46.224.42.239:22-4.153.228.146:50694.service - OpenSSH per-connection server daemon (4.153.228.146:50694).
Jan 16 23:59:53.165261 sshd[5423]: Invalid user frank from 159.89.121.21 port 54312
Jan 16 23:59:53.268284 sshd[5423]: Received disconnect from 159.89.121.21 port 54312:11: Bye Bye [preauth]
Jan 16 23:59:53.268284 sshd[5423]: Disconnected from invalid user frank 159.89.121.21 port 54312 [preauth]
Jan 16 23:59:53.271565 systemd[1]: sshd@15-46.224.42.239:22-159.89.121.21:54312.service: Deactivated successfully.
Jan 16 23:59:53.354987 sshd[5428]: Accepted publickey for core from 4.153.228.146 port 50694 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:53.356153 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:53.365499 systemd-logind[1448]: New session 12 of user core.
Jan 16 23:59:53.375227 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 16 23:59:53.583841 kubelet[2566]: E0116 23:59:53.583188 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 16 23:59:53.896047 sshd[5428]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:53.901849 systemd[1]: sshd@16-46.224.42.239:22-4.153.228.146:50694.service: Deactivated successfully.
Jan 16 23:59:53.906422 systemd[1]: session-12.scope: Deactivated successfully.
Jan 16 23:59:53.908714 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit.
Jan 16 23:59:53.910668 systemd-logind[1448]: Removed session 12.
Jan 16 23:59:54.591502 kubelet[2566]: E0116 23:59:54.590653 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 16 23:59:54.594060 kubelet[2566]: E0116 23:59:54.592289 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 16 23:59:59.024466 systemd[1]: Started sshd@17-46.224.42.239:22-4.153.228.146:59904.service - OpenSSH per-connection server daemon (4.153.228.146:59904).
Jan 16 23:59:59.586178 kubelet[2566]: E0116 23:59:59.586118 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 16 23:59:59.661198 sshd[5443]: Accepted publickey for core from 4.153.228.146 port 59904 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:59.664887 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:59.674084 systemd-logind[1448]: New session 13 of user core.
Jan 16 23:59:59.678764 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:00:00.204845 sshd[5443]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:00.213479 systemd[1]: sshd@17-46.224.42.239:22-4.153.228.146:59904.service: Deactivated successfully.
Jan 17 00:00:00.221919 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:00:00.225771 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:00:00.232551 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Jan 17 00:00:00.235442 systemd-logind[1448]: Removed session 13.
Jan 17 00:00:00.259611 systemd[1]: logrotate.service: Deactivated successfully.
Jan 17 00:00:00.324466 systemd[1]: Started sshd@18-46.224.42.239:22-4.153.228.146:59906.service - OpenSSH per-connection server daemon (4.153.228.146:59906).
Jan 17 00:00:00.990507 sshd[5458]: Accepted publickey for core from 4.153.228.146 port 59906 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:00.993444 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:01.000874 systemd-logind[1448]: New session 14 of user core.
Jan 17 00:00:01.007219 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:00:01.585798 kubelet[2566]: E0117 00:00:01.585622 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 17 00:00:01.826730 sshd[5458]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:01.834595 systemd[1]: sshd@18-46.224.42.239:22-4.153.228.146:59906.service: Deactivated successfully.
Jan 17 00:00:01.838731 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:00:01.841174 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:00:01.844559 systemd-logind[1448]: Removed session 14.
Jan 17 00:00:01.944472 systemd[1]: Started sshd@19-46.224.42.239:22-4.153.228.146:59916.service - OpenSSH per-connection server daemon (4.153.228.146:59916).
Jan 17 00:00:02.591565 sshd[5469]: Accepted publickey for core from 4.153.228.146 port 59916 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:02.593835 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:02.604647 systemd-logind[1448]: New session 15 of user core.
Jan 17 00:00:02.608340 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:00:02.755696 systemd[1]: Started sshd@20-46.224.42.239:22-93.123.109.38:55814.service - OpenSSH per-connection server daemon (93.123.109.38:55814).
Jan 17 00:00:02.821122 sshd[5473]: Connection closed by 93.123.109.38 port 55814
Jan 17 00:00:02.823505 systemd[1]: sshd@20-46.224.42.239:22-93.123.109.38:55814.service: Deactivated successfully.
Jan 17 00:00:03.582777 kubelet[2566]: E0117 00:00:03.582717 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 17 00:00:03.941625 sshd[5469]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:03.947915 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:00:03.949468 systemd[1]: sshd@19-46.224.42.239:22-4.153.228.146:59916.service: Deactivated successfully.
Jan 17 00:00:03.955718 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:00:03.960443 systemd-logind[1448]: Removed session 15.
Jan 17 00:00:04.056395 systemd[1]: Started sshd@21-46.224.42.239:22-4.153.228.146:59922.service - OpenSSH per-connection server daemon (4.153.228.146:59922).
Jan 17 00:00:04.699210 sshd[5494]: Accepted publickey for core from 4.153.228.146 port 59922 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:04.702509 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:04.719491 systemd-logind[1448]: New session 16 of user core.
Jan 17 00:00:04.724446 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:00:05.478207 sshd[5494]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:05.482510 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:00:05.483220 systemd[1]: sshd@21-46.224.42.239:22-4.153.228.146:59922.service: Deactivated successfully.
Jan 17 00:00:05.487631 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:00:05.492804 systemd-logind[1448]: Removed session 16.
Jan 17 00:00:05.585982 kubelet[2566]: E0117 00:00:05.582916 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 17 00:00:05.605092 systemd[1]: Started sshd@22-46.224.42.239:22-4.153.228.146:36666.service - OpenSSH per-connection server daemon (4.153.228.146:36666).
Jan 17 00:00:06.235665 sshd[5507]: Accepted publickey for core from 4.153.228.146 port 36666 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:06.238677 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:06.247683 systemd-logind[1448]: New session 17 of user core.
Jan 17 00:00:06.257545 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:00:06.590609 kubelet[2566]: E0117 00:00:06.590257 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 17 00:00:06.783379 systemd[1]: Started sshd@23-46.224.42.239:22-42.112.42.129:30606.service - OpenSSH per-connection server daemon (42.112.42.129:30606).
Jan 17 00:00:06.807857 sshd[5507]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:06.814642 systemd[1]: sshd@22-46.224.42.239:22-4.153.228.146:36666.service: Deactivated successfully.
Jan 17 00:00:06.820173 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:00:06.822015 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:00:06.824479 systemd-logind[1448]: Removed session 17.
Jan 17 00:00:07.782459 sshd[5519]: Invalid user postgres from 42.112.42.129 port 30606
Jan 17 00:00:07.979538 sshd[5519]: Received disconnect from 42.112.42.129 port 30606:11: Bye Bye [preauth]
Jan 17 00:00:07.979701 sshd[5519]: Disconnected from invalid user postgres 42.112.42.129 port 30606 [preauth]
Jan 17 00:00:07.984346 systemd[1]: sshd@23-46.224.42.239:22-42.112.42.129:30606.service: Deactivated successfully.
Jan 17 00:00:08.584655 kubelet[2566]: E0117 00:00:08.584271 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 17 00:00:11.931993 systemd[1]: Started sshd@24-46.224.42.239:22-4.153.228.146:36678.service - OpenSSH per-connection server daemon (4.153.228.146:36678).
Jan 17 00:00:12.563773 sshd[5528]: Accepted publickey for core from 4.153.228.146 port 36678 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:12.565109 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:12.572272 systemd-logind[1448]: New session 18 of user core.
Jan 17 00:00:12.578260 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:00:12.593606 kubelet[2566]: E0117 00:00:12.590032 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 17 00:00:13.106511 sshd[5528]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:13.112724 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:00:13.113752 systemd[1]: sshd@24-46.224.42.239:22-4.153.228.146:36678.service: Deactivated successfully.
Jan 17 00:00:13.117577 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:00:13.120098 systemd-logind[1448]: Removed session 18.
Jan 17 00:00:13.584809 kubelet[2566]: E0117 00:00:13.584739 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 17 00:00:17.584037 kubelet[2566]: E0117 00:00:17.583460 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 17 00:00:18.233379 systemd[1]: Started sshd@25-46.224.42.239:22-4.153.228.146:33036.service - OpenSSH per-connection server daemon (4.153.228.146:33036).
Jan 17 00:00:18.873173 sshd[5543]: Accepted publickey for core from 4.153.228.146 port 33036 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:18.875431 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:18.889293 systemd-logind[1448]: New session 19 of user core.
Jan 17 00:00:18.899778 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:00:19.466876 sshd[5543]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:19.479327 systemd[1]: sshd@25-46.224.42.239:22-4.153.228.146:33036.service: Deactivated successfully.
Jan 17 00:00:19.488033 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:00:19.490205 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:00:19.494408 systemd-logind[1448]: Removed session 19.
Jan 17 00:00:20.585396 kubelet[2566]: E0117 00:00:20.585307 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 17 00:00:21.583561 kubelet[2566]: E0117 00:00:21.583470 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 17 00:00:23.583061 kubelet[2566]: E0117 00:00:23.582555 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 17 00:00:24.595637 systemd[1]: Started sshd@26-46.224.42.239:22-4.153.228.146:52202.service - OpenSSH per-connection server daemon (4.153.228.146:52202).
Jan 17 00:00:25.241899 sshd[5578]: Accepted publickey for core from 4.153.228.146 port 52202 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:25.243638 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:25.253645 systemd-logind[1448]: New session 20 of user core.
Jan 17 00:00:25.260370 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:00:25.825307 sshd[5578]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:25.833292 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:00:25.834377 systemd[1]: sshd@26-46.224.42.239:22-4.153.228.146:52202.service: Deactivated successfully.
Jan 17 00:00:25.844017 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:00:25.855580 systemd-logind[1448]: Removed session 20.
Jan 17 00:00:25.973866 systemd[1]: Started sshd@27-46.224.42.239:22-181.123.136.11:55700.service - OpenSSH per-connection server daemon (181.123.136.11:55700).
Jan 17 00:00:26.595001 kubelet[2566]: E0117 00:00:26.594310 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 17 00:00:27.304434 sshd[5590]: Invalid user k8s from 181.123.136.11 port 55700
Jan 17 00:00:27.552639 sshd[5590]: Received disconnect from 181.123.136.11 port 55700:11: Bye Bye [preauth]
Jan 17 00:00:27.552639 sshd[5590]: Disconnected from invalid user k8s 181.123.136.11 port 55700 [preauth]
Jan 17 00:00:27.555831 systemd[1]: sshd@27-46.224.42.239:22-181.123.136.11:55700.service: Deactivated successfully.
Jan 17 00:00:28.594318 kubelet[2566]: E0117 00:00:28.594243 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 17 00:00:28.597207 kubelet[2566]: E0117 00:00:28.597131 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 17 00:00:30.959572 systemd[1]: Started sshd@28-46.224.42.239:22-4.153.228.146:52210.service - OpenSSH per-connection server daemon (4.153.228.146:52210).
Jan 17 00:00:31.617224 sshd[5601]: Accepted publickey for core from 4.153.228.146 port 52210 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:31.618902 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:31.632589 systemd-logind[1448]: New session 21 of user core.
Jan 17 00:00:31.637741 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:00:32.204910 sshd[5601]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:32.215497 systemd[1]: sshd@28-46.224.42.239:22-4.153.228.146:52210.service: Deactivated successfully.
Jan 17 00:00:32.223580 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:00:32.227607 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:00:32.231523 systemd-logind[1448]: Removed session 21.
Jan 17 00:00:32.591387 kubelet[2566]: E0117 00:00:32.590509 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 17 00:00:33.584514 kubelet[2566]: E0117 00:00:33.584413 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 17 00:00:36.585206 kubelet[2566]: E0117 00:00:36.584733 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 17 00:00:37.314113 systemd[1]: Started sshd@29-46.224.42.239:22-4.153.228.146:51824.service - OpenSSH per-connection server daemon (4.153.228.146:51824).
Jan 17 00:00:37.931293 sshd[5614]: Accepted publickey for core from 4.153.228.146 port 51824 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:00:37.933248 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:37.942201 systemd-logind[1448]: New session 22 of user core.
Jan 17 00:00:37.949259 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:00:38.299404 systemd[1]: Started sshd@30-46.224.42.239:22-52.237.80.79:38972.service - OpenSSH per-connection server daemon (52.237.80.79:38972).
Jan 17 00:00:38.526037 sshd[5614]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:38.532170 systemd[1]: sshd@29-46.224.42.239:22-4.153.228.146:51824.service: Deactivated successfully.
Jan 17 00:00:38.537863 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:00:38.542851 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:00:38.544843 systemd-logind[1448]: Removed session 22.
Jan 17 00:00:39.176958 sshd[5624]: Invalid user ctf from 52.237.80.79 port 38972
Jan 17 00:00:39.344159 sshd[5624]: Received disconnect from 52.237.80.79 port 38972:11: Bye Bye [preauth]
Jan 17 00:00:39.344159 sshd[5624]: Disconnected from invalid user ctf 52.237.80.79 port 38972 [preauth]
Jan 17 00:00:39.348021 systemd[1]: sshd@30-46.224.42.239:22-52.237.80.79:38972.service: Deactivated successfully.
Jan 17 00:00:41.584711 containerd[1469]: time="2026-01-17T00:00:41.584657925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:00:42.347313 containerd[1469]: time="2026-01-17T00:00:42.347118649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:42.349340 containerd[1469]: time="2026-01-17T00:00:42.348689782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 00:00:42.349340 containerd[1469]: time="2026-01-17T00:00:42.348767463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 00:00:42.349507 kubelet[2566]: E0117 00:00:42.348912 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:00:42.349507 kubelet[2566]: E0117 00:00:42.348973 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:00:42.349507 kubelet[2566]: E0117 00:00:42.349102 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:740b13404779446abb486266eca53865,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:42.352860 containerd[1469]: time="2026-01-17T00:00:42.352744097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 00:00:42.584192 kubelet[2566]: E0117 00:00:42.583498 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 17 00:00:42.586111 kubelet[2566]: E0117 00:00:42.584764 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"
Jan 17 00:00:44.584001 kubelet[2566]: E0117 00:00:44.583891 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jr25d" podUID="195fd954-db29-4a46-a5c3-26216d80a6af"
Jan 17 00:00:46.221346 containerd[1469]: time="2026-01-17T00:00:46.221215280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:46.223328 containerd[1469]: time="2026-01-17T00:00:46.222998495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:00:46.223328 containerd[1469]: time="2026-01-17T00:00:46.223142136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:00:46.223703 kubelet[2566]: E0117 00:00:46.223295 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:00:46.223703 kubelet[2566]: E0117 00:00:46.223352 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:00:46.223703 kubelet[2566]: E0117 00:00:46.223515 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdf9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c4f4c46c6-chbl9_calico-system(377f520a-36e6-491f-865e-cdb387ff596c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:46.224838 kubelet[2566]: E0117 00:00:46.224776 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 17 00:00:46.587613 kubelet[2566]: E0117 00:00:46.587530 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-26hzj" podUID="b9c78c11-17fe-4d54-827b-16ba9d81154b"
Jan 17 00:00:47.583317 kubelet[2566]: E0117 00:00:47.583236 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c7c7c7dd6-kgtcm" podUID="861b4149-53db-42c9-9886-651961041ffb"
Jan 17 00:00:49.922237 systemd[1]: run-containerd-runc-k8s.io-41cc996a9e73c13dc12c83be9ac2c05532b570111d0e48005612074aef88e165-runc.ZdIDrV.mount: Deactivated successfully.
Jan 17 00:00:53.499328 systemd[1]: cri-containerd-0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f.scope: Deactivated successfully.
Jan 17 00:00:53.500919 systemd[1]: cri-containerd-0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f.scope: Consumed 44.224s CPU time.
Jan 17 00:00:53.508845 systemd[1]: cri-containerd-b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394.scope: Deactivated successfully.
Jan 17 00:00:53.509275 systemd[1]: cri-containerd-b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394.scope: Consumed 6.121s CPU time, 17.9M memory peak, 0B memory swap peak.
Jan 17 00:00:53.544153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f-rootfs.mount: Deactivated successfully.
Jan 17 00:00:53.552316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394-rootfs.mount: Deactivated successfully.
Jan 17 00:00:53.560965 containerd[1469]: time="2026-01-17T00:00:53.560829448Z" level=info msg="shim disconnected" id=b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394 namespace=k8s.io
Jan 17 00:00:53.560965 containerd[1469]: time="2026-01-17T00:00:53.560895649Z" level=warning msg="cleaning up after shim disconnected" id=b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394 namespace=k8s.io
Jan 17 00:00:53.560965 containerd[1469]: time="2026-01-17T00:00:53.560909049Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:53.563566 containerd[1469]: time="2026-01-17T00:00:53.563310068Z" level=info msg="shim disconnected" id=0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f namespace=k8s.io
Jan 17 00:00:53.563566 containerd[1469]: time="2026-01-17T00:00:53.563369389Z" level=warning msg="cleaning up after shim disconnected" id=0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f namespace=k8s.io
Jan 17 00:00:53.563566 containerd[1469]: time="2026-01-17T00:00:53.563379709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:53.747136 kubelet[2566]: E0117 00:00:53.745771 2566 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48842->10.0.0.2:2379: read: connection timed out"
Jan 17 00:00:54.569251 kubelet[2566]: I0117 00:00:54.569206 2566 scope.go:117] "RemoveContainer" containerID="0ffa7bef398892635dc1edee70a9b83960b09ed469815fcf569af244792b7f6f"
Jan 17 00:00:54.569769 kubelet[2566]: I0117 00:00:54.569554 2566 scope.go:117] "RemoveContainer" containerID="b0387641e8c7b6d69bf0e25fbfa10c9bd3eb8c58a47622b8b556a0b76120c394"
Jan 17 00:00:54.572352 containerd[1469]: time="2026-01-17T00:00:54.572111298Z" level=info msg="CreateContainer within sandbox \"9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 00:00:54.572352 containerd[1469]: time="2026-01-17T00:00:54.572347420Z" level=info msg="CreateContainer within sandbox \"7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:00:54.591441 containerd[1469]: time="2026-01-17T00:00:54.591011768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 17 00:00:54.621538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322387638.mount: Deactivated successfully.
Jan 17 00:00:54.623735 containerd[1469]: time="2026-01-17T00:00:54.623695148Z" level=info msg="CreateContainer within sandbox \"9ffddec2d6a41feda0db6ff22926a3c5ead6779b53ec352c37c1aaf672a32055\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"44b9ec6ba29c67d4e33fa996af11a61ffb564913519fd01bda5571f2a464eff3\""
Jan 17 00:00:54.624345 containerd[1469]: time="2026-01-17T00:00:54.624282833Z" level=info msg="StartContainer for \"44b9ec6ba29c67d4e33fa996af11a61ffb564913519fd01bda5571f2a464eff3\""
Jan 17 00:00:54.626377 containerd[1469]: time="2026-01-17T00:00:54.626259088Z" level=info msg="CreateContainer within sandbox \"7ff48d85cb350cd537e4ddf46380b212559767d506543f875eda01254032738c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2c1221f8b1a0a3f680956a19b47b06efbc6a0d80300328e3cae5ff7cdcf60502\""
Jan 17 00:00:54.627457 containerd[1469]: time="2026-01-17T00:00:54.627323417Z" level=info msg="StartContainer for \"2c1221f8b1a0a3f680956a19b47b06efbc6a0d80300328e3cae5ff7cdcf60502\""
Jan 17 00:00:54.664403 systemd[1]: Started cri-containerd-44b9ec6ba29c67d4e33fa996af11a61ffb564913519fd01bda5571f2a464eff3.scope - libcontainer container 44b9ec6ba29c67d4e33fa996af11a61ffb564913519fd01bda5571f2a464eff3.
Jan 17 00:00:54.684251 systemd[1]: Started cri-containerd-2c1221f8b1a0a3f680956a19b47b06efbc6a0d80300328e3cae5ff7cdcf60502.scope - libcontainer container 2c1221f8b1a0a3f680956a19b47b06efbc6a0d80300328e3cae5ff7cdcf60502.
Jan 17 00:00:54.731627 containerd[1469]: time="2026-01-17T00:00:54.731412963Z" level=info msg="StartContainer for \"44b9ec6ba29c67d4e33fa996af11a61ffb564913519fd01bda5571f2a464eff3\" returns successfully"
Jan 17 00:00:54.750169 containerd[1469]: time="2026-01-17T00:00:54.750058751Z" level=info msg="StartContainer for \"2c1221f8b1a0a3f680956a19b47b06efbc6a0d80300328e3cae5ff7cdcf60502\" returns successfully"
Jan 17 00:00:55.033164 kubelet[2566]: E0117 00:00:55.025208 2566 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48686->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-jr25d.188b5b7de4c735ed calico-system 1754 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-jr25d,UID:195fd954-db29-4a46-a5c3-26216d80a6af,APIVersion:v1,ResourceVersion:795,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-fe2a5b3650,},FirstTimestamp:2026-01-16 23:57:54 +0000 UTC,LastTimestamp:2026-01-17 00:00:44.583838766 +0000 UTC m=+220.161972389,Count:12,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-fe2a5b3650,}"
Jan 17 00:00:56.366272 containerd[1469]: time="2026-01-17T00:00:56.366182028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:56.368325 containerd[1469]: time="2026-01-17T00:00:56.368225684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 17 00:00:56.368495 containerd[1469]: time="2026-01-17T00:00:56.368339605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:00:56.368674 kubelet[2566]: E0117 00:00:56.368618 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:00:56.369274 kubelet[2566]: E0117 00:00:56.368683 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:00:56.369478 containerd[1469]: time="2026-01-17T00:00:56.369438654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 00:00:56.369797 kubelet[2566]: E0117 00:00:56.369212 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75f2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bd9998fd-lpljt_calico-system(e83705ab-d8ce-46ca-880d-899f69158672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:56.371113 kubelet[2566]: E0117 00:00:56.371056 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bd9998fd-lpljt" podUID="e83705ab-d8ce-46ca-880d-899f69158672"
Jan 17 00:00:56.952998 containerd[1469]: time="2026-01-17T00:00:56.952648154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:56.954350 containerd[1469]: time="2026-01-17T00:00:56.954247966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 00:00:56.954517 containerd[1469]: time="2026-01-17T00:00:56.954393687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:00:56.954908 kubelet[2566]: E0117 00:00:56.954696 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:00:56.954908 kubelet[2566]: E0117 00:00:56.954766 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:00:56.955278 kubelet[2566]: E0117 00:00:56.954901 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:56.958122 containerd[1469]: time="2026-01-17T00:00:56.957810074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:00:57.583086 kubelet[2566]: E0117 00:00:57.583000 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c4f4c46c6-chbl9" podUID="377f520a-36e6-491f-865e-cdb387ff596c"
Jan 17 00:00:58.104435 containerd[1469]: time="2026-01-17T00:00:58.104117865Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:58.108912 containerd[1469]: time="2026-01-17T00:00:58.108713380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:00:58.109842 containerd[1469]: time="2026-01-17T00:00:58.108845101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:00:58.109882 kubelet[2566]: E0117 00:00:58.109244 2566 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:00:58.109882 kubelet[2566]: E0117 00:00:58.109347 2566 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:00:58.109882 kubelet[2566]: E0117 00:00:58.109480 2566 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54dj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j4ltk_calico-system(f9b64606-aa04-4801-bf16-55e0f797524c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:58.110848 kubelet[2566]: E0117 00:00:58.110712 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j4ltk" podUID="f9b64606-aa04-4801-bf16-55e0f797524c"