Jan 16 23:55:49.900048 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 16 23:55:49.900085 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:55:49.900097 kernel: KASLR enabled
Jan 16 23:55:49.900103 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 16 23:55:49.900109 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 16 23:55:49.900115 kernel: random: crng init done
Jan 16 23:55:49.900123 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:55:49.900129 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 16 23:55:49.900135 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 16 23:55:49.900143 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900149 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900155 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900162 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900168 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900175 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900184 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900191 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900197 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:55:49.900204 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 23:55:49.900210 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 16 23:55:49.900217 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:55:49.900223 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:55:49.900230 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 16 23:55:49.900236 kernel: Zone ranges:
Jan 16 23:55:49.900242 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:55:49.900250 kernel: DMA32 empty
Jan 16 23:55:49.900257 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 16 23:55:49.900263 kernel: Movable zone start for each node
Jan 16 23:55:49.900270 kernel: Early memory node ranges
Jan 16 23:55:49.900276 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 16 23:55:49.900283 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 16 23:55:49.900289 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 16 23:55:49.900296 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 16 23:55:49.900302 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 16 23:55:49.900309 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 16 23:55:49.900315 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 16 23:55:49.900322 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:55:49.900330 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 16 23:55:49.900336 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:55:49.900343 kernel: psci: PSCIv1.1 detected in firmware.
Jan 16 23:55:49.900352 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:55:49.900359 kernel: psci: Trusted OS migration not required
Jan 16 23:55:49.900366 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:55:49.900375 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 16 23:55:49.900382 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:55:49.900388 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:55:49.900395 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:55:49.900402 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:55:49.900409 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:55:49.900416 kernel: CPU features: detected: Hardware dirty bit management
Jan 16 23:55:49.900423 kernel: CPU features: detected: Spectre-v4
Jan 16 23:55:49.900429 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:55:49.900436 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 16 23:55:49.900444 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 16 23:55:49.900451 kernel: CPU features: detected: ARM erratum 1418040
Jan 16 23:55:49.900458 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 16 23:55:49.900465 kernel: alternatives: applying boot alternatives
Jan 16 23:55:49.900473 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:55:49.900480 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:55:49.900487 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:55:49.900494 kernel: Fallback order for Node 0: 0
Jan 16 23:55:49.900501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 16 23:55:49.900508 kernel: Policy zone: Normal
Jan 16 23:55:49.900514 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:55:49.900523 kernel: software IO TLB: area num 2.
Jan 16 23:55:49.900529 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 16 23:55:49.900537 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 16 23:55:49.900544 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:55:49.900551 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:55:49.900558 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:55:49.900565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:55:49.900572 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:55:49.900579 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:55:49.900586 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 23:55:49.900593 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:55:49.900600 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:55:49.900608 kernel: GICv3: 256 SPIs implemented
Jan 16 23:55:49.900615 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:55:49.901226 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:55:49.901251 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 16 23:55:49.901258 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 16 23:55:49.901266 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 16 23:55:49.901273 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:55:49.901280 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:55:49.901287 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 16 23:55:49.901294 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 16 23:55:49.901301 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:55:49.901314 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:55:49.901321 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 16 23:55:49.901328 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 16 23:55:49.901335 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 16 23:55:49.901342 kernel: Console: colour dummy device 80x25
Jan 16 23:55:49.901350 kernel: ACPI: Core revision 20230628
Jan 16 23:55:49.901357 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 16 23:55:49.901365 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:55:49.901372 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:55:49.901379 kernel: landlock: Up and running.
Jan 16 23:55:49.901388 kernel: SELinux: Initializing.
Jan 16 23:55:49.901395 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:55:49.901402 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:55:49.901409 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:55:49.901416 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:55:49.901424 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:55:49.901431 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 23:55:49.901439 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 16 23:55:49.901446 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 16 23:55:49.901454 kernel: Remapping and enabling EFI services.
Jan 16 23:55:49.901461 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:55:49.901469 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:55:49.901476 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 16 23:55:49.901483 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 16 23:55:49.901498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:55:49.901505 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 16 23:55:49.901512 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:55:49.901519 kernel: SMP: Total of 2 processors activated.
Jan 16 23:55:49.901528 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:55:49.901536 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 16 23:55:49.901543 kernel: CPU features: detected: Common not Private translations
Jan 16 23:55:49.901557 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:55:49.901566 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 16 23:55:49.901574 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 16 23:55:49.901582 kernel: CPU features: detected: LSE atomic instructions
Jan 16 23:55:49.901589 kernel: CPU features: detected: Privileged Access Never
Jan 16 23:55:49.901596 kernel: CPU features: detected: RAS Extension Support
Jan 16 23:55:49.901606 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 16 23:55:49.901613 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:55:49.901621 kernel: alternatives: applying system-wide alternatives
Jan 16 23:55:49.901693 kernel: devtmpfs: initialized
Jan 16 23:55:49.901702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:55:49.901710 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:55:49.901717 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:55:49.901725 kernel: SMBIOS 3.0.0 present.
Jan 16 23:55:49.901736 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 16 23:55:49.901743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:55:49.901751 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:55:49.901759 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:55:49.901767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:55:49.901774 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:55:49.901782 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Jan 16 23:55:49.901790 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:55:49.901798 kernel: cpuidle: using governor menu
Jan 16 23:55:49.901808 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:55:49.901815 kernel: ASID allocator initialised with 32768 entries
Jan 16 23:55:49.901823 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:55:49.901831 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:55:49.901838 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 16 23:55:49.901846 kernel: Modules: 0 pages in range for non-PLT usage
Jan 16 23:55:49.901854 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:55:49.901861 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:55:49.901869 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:55:49.901878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:55:49.901886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:55:49.901894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:55:49.901901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:55:49.901909 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:55:49.901916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:55:49.901924 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:55:49.901932 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:55:49.901939 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:55:49.901948 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:55:49.901956 kernel: ACPI: Interpreter enabled
Jan 16 23:55:49.901963 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:55:49.901971 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:55:49.901978 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 16 23:55:49.901986 kernel: printk: console [ttyAMA0] enabled
Jan 16 23:55:49.901993 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 23:55:49.902161 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:55:49.902241 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:55:49.902308 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:55:49.902373 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 16 23:55:49.902440 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 16 23:55:49.902450 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 16 23:55:49.902458 kernel: PCI host bridge to bus 0000:00
Jan 16 23:55:49.902530 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 16 23:55:49.902593 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 23:55:49.902702 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 16 23:55:49.902771 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 23:55:49.902859 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 16 23:55:49.902939 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 16 23:55:49.903006 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 16 23:55:49.903074 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:55:49.903153 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.903221 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 16 23:55:49.903301 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.903369 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 16 23:55:49.903444 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.903511 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 16 23:55:49.903590 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.903732 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 16 23:55:49.903821 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.903906 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 16 23:55:49.903985 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.904054 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 16 23:55:49.904137 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.904217 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 16 23:55:49.904293 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.904361 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 16 23:55:49.904437 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:55:49.904506 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 16 23:55:49.904596 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 16 23:55:49.904689 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 16 23:55:49.904780 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:55:49.904853 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 16 23:55:49.904923 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:55:49.904993 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:55:49.905071 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 16 23:55:49.905146 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 16 23:55:49.905222 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 16 23:55:49.905292 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 16 23:55:49.905360 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 16 23:55:49.905436 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 16 23:55:49.907822 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 16 23:55:49.907946 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 16 23:55:49.908019 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 16 23:55:49.908091 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 16 23:55:49.908172 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 16 23:55:49.908248 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 16 23:55:49.908318 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:55:49.908402 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:55:49.908473 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 16 23:55:49.908543 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 16 23:55:49.908612 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:55:49.908934 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 16 23:55:49.909007 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:55:49.909073 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:55:49.909157 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 16 23:55:49.909233 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 16 23:55:49.909299 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 16 23:55:49.909369 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 16 23:55:49.909436 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:55:49.909503 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:55:49.909574 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 16 23:55:49.909658 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 16 23:55:49.909792 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 16 23:55:49.909880 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 16 23:55:49.909962 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:55:49.910038 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:55:49.910118 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 16 23:55:49.910184 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:55:49.910249 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:55:49.910323 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 16 23:55:49.910389 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:55:49.910454 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:55:49.910524 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 16 23:55:49.910597 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:55:49.912807 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:55:49.912915 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 16 23:55:49.912993 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:55:49.913060 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:55:49.913132 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 16 23:55:49.913199 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:49.913270 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 16 23:55:49.913339 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:49.913409 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 16 23:55:49.913480 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:49.913551 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 16 23:55:49.913617 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:49.913734 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 16 23:55:49.913805 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:49.913876 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:49.913944 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:49.914019 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:49.914087 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:49.914153 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:49.914223 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:49.914309 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 16 23:55:49.914386 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:49.914465 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 16 23:55:49.916843 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 16 23:55:49.916942 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 16 23:55:49.917012 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 16 23:55:49.917086 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 16 23:55:49.917166 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 16 23:55:49.917239 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 16 23:55:49.917308 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 16 23:55:49.917381 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 16 23:55:49.917864 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 16 23:55:49.917953 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 16 23:55:49.918020 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 16 23:55:49.918098 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 16 23:55:49.918167 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 16 23:55:49.918237 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 16 23:55:49.918303 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 16 23:55:49.918372 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 16 23:55:49.918448 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 16 23:55:49.918518 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 16 23:55:49.918587 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 16 23:55:49.918748 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 16 23:55:49.918856 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 16 23:55:49.918929 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:55:49.918999 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 16 23:55:49.919067 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 23:55:49.919164 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 16 23:55:49.919237 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 23:55:49.919303 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:49.919379 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 16 23:55:49.919454 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 23:55:49.919521 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 16 23:55:49.920589 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 16 23:55:49.921046 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:49.921134 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:55:49.921204 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 16 23:55:49.921273 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 23:55:49.921340 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 16 23:55:49.921412 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 16 23:55:49.921478 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:49.921554 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:55:49.921709 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 23:55:49.921804 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 16 23:55:49.921870 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 16 23:55:49.921939 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:49.922015 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 16 23:55:49.922090 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 16 23:55:49.922158 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 23:55:49.922223 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 16 23:55:49.922288 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 23:55:49.922353 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:49.922430 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 16 23:55:49.922500 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 16 23:55:49.922568 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 23:55:49.922649 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 16 23:55:49.922731 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:49.922803 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:49.922879 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 16 23:55:49.922949 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 16 23:55:49.923020 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 16 23:55:49.923089 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 23:55:49.923156 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 16 23:55:49.923226 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:49.923293 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:49.923362 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 23:55:49.923427 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 16 23:55:49.923492 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:49.923558 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:49.925755 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 23:55:49.925886 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 16 23:55:49.925963 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 16 23:55:49.926031 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:49.926100 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 16 23:55:49.926159 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 16 23:55:49.926218 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 16 23:55:49.926299 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 16 23:55:49.926362 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 16 23:55:49.926426 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:55:49.926497 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 16 23:55:49.926558 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 16 23:55:49.926619 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:55:49.926852 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 16 23:55:49.926915 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 16 23:55:49.926980 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:55:49.927048 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 16 23:55:49.927108 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 16 23:55:49.927184 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:55:49.927256 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 16 23:55:49.927324 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 16 23:55:49.927384 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:55:49.927457 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 16 23:55:49.927519 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 16 23:55:49.927581 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:55:49.927664 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 16 23:55:49.927781 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 16 23:55:49.927849 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:55:49.927920 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 16 23:55:49.927982 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 16 23:55:49.928044 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:55:49.928111 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 16 23:55:49.928173 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 16 23:55:49.928239 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:55:49.928250 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:55:49.928258 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:55:49.928266 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:55:49.928274 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:55:49.928282 kernel: iommu: Default domain type: Translated
Jan 16 23:55:49.928290 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:55:49.928298 kernel: efivars: Registered efivars operations
Jan 16 23:55:49.928308 kernel: vgaarb: loaded
Jan 16 23:55:49.928316 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:55:49.928324 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:55:49.928332 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:55:49.928340 kernel: pnp: PnP ACPI init
Jan 16 23:55:49.928425 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 16 23:55:49.928437 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:55:49.928445 kernel: NET: Registered PF_INET protocol family
Jan 16 23:55:49.928453 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:55:49.928463 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:55:49.928471 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:55:49.928479 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:55:49.928487 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 23:55:49.928495 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 23:55:49.928503 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:55:49.928511 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:55:49.928519 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 23:55:49.928595 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:49.928609 kernel: PCI: CLS 0 bytes, default 64
Jan 16 23:55:49.928617 kernel: kvm [1]: HYP mode not available
Jan 16 23:55:49.930735 kernel: Initialise system trusted keyrings
Jan 16 23:55:49.930750 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 23:55:49.930760 kernel: Key type asymmetric registered
Jan 16 23:55:49.930768 kernel: Asymmetric key parser 'x509' registered
Jan 16 23:55:49.930776 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 23:55:49.930784 kernel: io scheduler mq-deadline registered
Jan 16 23:55:49.930793 kernel: io scheduler kyber registered
Jan 16 23:55:49.930807 kernel: io scheduler bfq registered
Jan 16 23:55:49.930817 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 16 23:55:49.930931 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 16 23:55:49.931003 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 16 23:55:49.931069 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.931141 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 16 23:55:49.931208 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 16 23:55:49.931278 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.931353 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 16 23:55:49.931432 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 16 23:55:49.931499 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.931569 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 16 23:55:49.931657 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 16 23:55:49.931746 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.931820 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 16 23:55:49.931887 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 16 23:55:49.931952 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.932022 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 16 23:55:49.932103 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 16 23:55:49.932174 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.932245 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 16 23:55:49.932314 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 16 23:55:49.932379 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.932450 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 16 23:55:49.932520 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 16 23:55:49.932589 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.932601 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 16 23:55:49.933534 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 16 23:55:49.933618 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 16 23:55:49.933730 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 16 23:55:49.933743 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 16 23:55:49.933759 kernel: ACPI: button: Power Button [PWRB]
Jan 16 23:55:49.933768 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 16 23:55:49.933846 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:49.933923 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 16 23:55:49.933935 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 23:55:49.933943 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 16 23:55:49.934014 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 16 23:55:49.934025 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 16 23:55:49.934033 kernel: thunder_xcv, ver 1.0
Jan 16 23:55:49.934044 kernel: thunder_bgx, ver 1.0
Jan 16 23:55:49.934052 kernel: nicpf, ver 1.0
Jan 16 23:55:49.934060 kernel: nicvf, ver 1.0
Jan 16 23:55:49.934140 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 16 23:55:49.934203 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:55:49 UTC (1768607749)
Jan 16 23:55:49.934214 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 23:55:49.934222 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 16 23:55:49.934230 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 16 23:55:49.934241 kernel: watchdog: Hard watchdog permanently disabled
Jan 16 23:55:49.934250 kernel: NET: Registered PF_INET6 protocol family
Jan 16 23:55:49.934257 kernel: Segment Routing with IPv6
Jan 16 23:55:49.934265 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 23:55:49.934273 kernel: NET: Registered PF_PACKET protocol family
Jan 16 23:55:49.934281 kernel: Key type dns_resolver registered
Jan 16 23:55:49.934289 kernel: registered taskstats version 1
Jan 16 23:55:49.934297 kernel: Loading compiled-in X.509 certificates
Jan 16 23:55:49.934305 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 16 23:55:49.934315 kernel: Key type .fscrypt registered
Jan 16 23:55:49.934323 kernel: Key type fscrypt-provisioning registered
Jan 16 23:55:49.934330 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 23:55:49.934339 kernel: ima: Allocated hash algorithm: sha1
Jan 16 23:55:49.934346 kernel: ima: No architecture policies found
Jan 16 23:55:49.934355 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 16 23:55:49.934362 kernel: clk: Disabling unused clocks
Jan 16 23:55:49.934370 kernel: Freeing unused kernel memory: 39424K
Jan 16 23:55:49.934378 kernel: Run /init as init process
Jan 16 23:55:49.934387 kernel: with arguments:
Jan 16 23:55:49.934395 kernel: /init
Jan 16 23:55:49.934403 kernel: with environment:
Jan 16 23:55:49.934411 kernel: HOME=/
Jan 16 23:55:49.934418 kernel: TERM=linux
Jan 16 23:55:49.934428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:55:49.934438 systemd[1]: Detected virtualization kvm.
Jan 16 23:55:49.934447 systemd[1]: Detected architecture arm64.
Jan 16 23:55:49.934457 systemd[1]: Running in initrd.
Jan 16 23:55:49.934465 systemd[1]: No hostname configured, using default hostname.
Jan 16 23:55:49.934473 systemd[1]: Hostname set to .
Jan 16 23:55:49.934481 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:55:49.934490 systemd[1]: Queued start job for default target initrd.target.
Jan 16 23:55:49.934498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:55:49.934506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:55:49.934516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 23:55:49.934526 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:55:49.934534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 23:55:49.934544 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 23:55:49.934554 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 23:55:49.934563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 23:55:49.934571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:55:49.934580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:55:49.934590 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:55:49.934598 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:55:49.934606 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:55:49.934615 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:55:49.934641 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:55:49.934651 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:55:49.934659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:55:49.934668 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:55:49.934679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:55:49.934696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:55:49.934705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:55:49.934713 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:55:49.934722 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 23:55:49.934730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:55:49.934739 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 23:55:49.934747 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 23:55:49.934756 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:55:49.934768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:55:49.934776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:49.934784 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 23:55:49.934819 systemd-journald[237]: Collecting audit messages is disabled.
Jan 16 23:55:49.934844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:55:49.934853 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 23:55:49.934862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:55:49.934870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:49.934881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:49.934890 systemd-journald[237]: Journal started
Jan 16 23:55:49.934911 systemd-journald[237]: Runtime Journal (/run/log/journal/60eae71b81bc44c6824b60ca6f6d7592) is 8.0M, max 76.6M, 68.6M free.
Jan 16 23:55:49.918005 systemd-modules-load[238]: Inserted module 'overlay'
Jan 16 23:55:49.938869 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:55:49.940668 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:55:49.942697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 23:55:49.945776 kernel: Bridge firewalling registered
Jan 16 23:55:49.945509 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 16 23:55:49.949089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:55:49.952822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:55:49.956883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:55:49.959897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:55:49.980416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:55:49.981386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:55:49.987443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:55:49.995892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:55:49.998458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:50.005924 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 23:55:50.024813 systemd-resolved[272]: Positive Trust Anchors:
Jan 16 23:55:50.025511 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:55:50.026822 dracut-cmdline[274]: dracut-dracut-053
Jan 16 23:55:50.026387 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:55:50.035830 systemd-resolved[272]: Defaulting to hostname 'linux'.
Jan 16 23:55:50.037573 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:55:50.040881 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:55:50.041561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:55:50.109719 kernel: SCSI subsystem initialized
Jan 16 23:55:50.114667 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 23:55:50.122713 kernel: iscsi: registered transport (tcp)
Jan 16 23:55:50.136650 kernel: iscsi: registered transport (qla4xxx)
Jan 16 23:55:50.136751 kernel: QLogic iSCSI HBA Driver
Jan 16 23:55:50.194947 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:55:50.201966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 23:55:50.227733 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 23:55:50.227825 kernel: device-mapper: uevent: version 1.0.3
Jan 16 23:55:50.227846 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 23:55:50.277708 kernel: raid6: neonx8 gen() 15574 MB/s
Jan 16 23:55:50.294679 kernel: raid6: neonx4 gen() 15477 MB/s
Jan 16 23:55:50.311659 kernel: raid6: neonx2 gen() 13108 MB/s
Jan 16 23:55:50.328671 kernel: raid6: neonx1 gen() 10438 MB/s
Jan 16 23:55:50.345668 kernel: raid6: int64x8 gen() 6927 MB/s
Jan 16 23:55:50.362715 kernel: raid6: int64x4 gen() 7302 MB/s
Jan 16 23:55:50.379679 kernel: raid6: int64x2 gen() 6106 MB/s
Jan 16 23:55:50.396890 kernel: raid6: int64x1 gen() 5036 MB/s
Jan 16 23:55:50.396966 kernel: raid6: using algorithm neonx8 gen() 15574 MB/s
Jan 16 23:55:50.413717 kernel: raid6: .... xor() 11903 MB/s, rmw enabled
Jan 16 23:55:50.413813 kernel: raid6: using neon recovery algorithm
Jan 16 23:55:50.418656 kernel: xor: measuring software checksum speed
Jan 16 23:55:50.418735 kernel: 8regs : 17796 MB/sec
Jan 16 23:55:50.419783 kernel: 32regs : 19674 MB/sec
Jan 16 23:55:50.419803 kernel: arm64_neon : 26954 MB/sec
Jan 16 23:55:50.419823 kernel: xor: using function: arm64_neon (26954 MB/sec)
Jan 16 23:55:50.472784 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 23:55:50.486587 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:55:50.494956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:55:50.509742 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 16 23:55:50.513263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:55:50.520809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 23:55:50.537700 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jan 16 23:55:50.576512 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:55:50.582841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:55:50.635000 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:55:50.643841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 23:55:50.664931 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:55:50.666372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:55:50.667187 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:55:50.669792 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:55:50.678314 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 23:55:50.697981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:55:50.750094 kernel: ACPI: bus type USB registered
Jan 16 23:55:50.750160 kernel: usbcore: registered new interface driver usbfs
Jan 16 23:55:50.750172 kernel: usbcore: registered new interface driver hub
Jan 16 23:55:50.751027 kernel: usbcore: registered new device driver usb
Jan 16 23:55:50.760524 kernel: scsi host0: Virtio SCSI HBA
Jan 16 23:55:50.768143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:55:50.768427 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:50.769705 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:50.779835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:55:50.783432 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 16 23:55:50.780005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:50.785700 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 16 23:55:50.782870 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:50.792359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:55:50.810205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:55:50.813984 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 16 23:55:50.814179 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 16 23:55:50.816652 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 16 23:55:50.817887 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:55:50.822804 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 16 23:55:50.823045 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 16 23:55:50.823257 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 16 23:55:50.824938 kernel: hub 1-0:1.0: USB hub found
Jan 16 23:55:50.825118 kernel: hub 1-0:1.0: 4 ports detected
Jan 16 23:55:50.828639 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 16 23:55:50.829709 kernel: hub 2-0:1.0: USB hub found
Jan 16 23:55:50.829821 kernel: hub 2-0:1.0: 4 ports detected
Jan 16 23:55:50.829903 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 16 23:55:50.830007 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 16 23:55:50.834211 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 16 23:55:50.836757 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 16 23:55:50.843192 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:55:50.848006 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 16 23:55:50.848233 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 16 23:55:50.848942 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 16 23:55:50.850663 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 16 23:55:50.850838 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 16 23:55:50.856594 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 23:55:50.856677 kernel: GPT:17805311 != 80003071
Jan 16 23:55:50.856704 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 23:55:50.856715 kernel: GPT:17805311 != 80003071
Jan 16 23:55:50.856723 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 23:55:50.856733 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 16 23:55:50.858728 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 16 23:55:50.909666 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (516)
Jan 16 23:55:50.909763 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (509)
Jan 16 23:55:50.918569 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 16 23:55:50.926614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 16 23:55:50.934357 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 16 23:55:50.936104 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 16 23:55:50.942384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:55:50.950902 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 23:55:50.974452 disk-uuid[571]: Primary Header is updated. Jan 16 23:55:50.974452 disk-uuid[571]: Secondary Entries is updated. Jan 16 23:55:50.974452 disk-uuid[571]: Secondary Header is updated. Jan 16 23:55:50.981827 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:50.989431 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:50.994659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:51.064646 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 16 23:55:51.200661 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 16 23:55:51.202814 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 16 23:55:51.204650 kernel: usbcore: registered new interface driver usbhid Jan 16 23:55:51.204749 kernel: usbhid: USB HID core driver Jan 16 23:55:51.307728 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 16 23:55:51.438660 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 16 23:55:51.491796 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 16 23:55:51.995359 disk-uuid[572]: The operation has completed successfully. Jan 16 23:55:51.996735 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:55:52.051072 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 23:55:52.051197 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 23:55:52.065932 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 16 23:55:52.073797 sh[590]: Success Jan 16 23:55:52.086862 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 16 23:55:52.148799 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 23:55:52.151432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 16 23:55:52.152210 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 16 23:55:52.181984 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 16 23:55:52.182053 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:52.182068 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 23:55:52.182094 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 23:55:52.182754 kernel: BTRFS info (device dm-0): using free space tree Jan 16 23:55:52.189685 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 23:55:52.191157 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
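The GPT warnings earlier ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 80003071) are the classic signature of an image built for a small disk and written to a larger one: the backup GPT header sits where the old disk ended, and disk-uuid.service rewrites it at the true last LBA, which is what the "Primary Header is updated" / "Secondary Header is updated" records above show. A rough sketch of the check, using the figures from the log and the 512-byte sector size reported for sda:

```python
# Where the backup GPT header should live vs. where it actually is.
# All values are taken from the log records above.
SECTOR = 512
disk_sectors = 80003072             # sda: 80003072 512-byte logical blocks
backup_lba_found = 17805311         # where the image's backup header sits
backup_lba_expected = disk_sectors - 1  # GPT keeps the backup at the last LBA

if backup_lba_found != backup_lba_expected:
    grown_gib = (backup_lba_expected - backup_lba_found) * SECTOR / 2**30
    print(f"backup header at LBA {backup_lba_found}, expected "
          f"{backup_lba_expected}: disk grew by ~{grown_gib:.1f} GiB; "
          "rewrite the backup header and entries at the end of the disk")
```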
Jan 16 23:55:52.193413 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 23:55:52.202923 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 23:55:52.206968 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 23:55:52.220334 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:52.220397 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:52.220410 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:55:52.224657 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:55:52.224730 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:55:52.240828 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:52.240590 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 23:55:52.246975 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 23:55:52.252883 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 23:55:52.356569 ignition[672]: Ignition 2.19.0 Jan 16 23:55:52.356579 ignition[672]: Stage: fetch-offline Jan 16 23:55:52.356639 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:52.361351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 23:55:52.356648 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:52.356836 ignition[672]: parsed url from cmdline: "" Jan 16 23:55:52.356839 ignition[672]: no config URL provided Jan 16 23:55:52.356844 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 23:55:52.356852 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 16 23:55:52.356858 ignition[672]: failed to fetch config: resource requires networking Jan 16 23:55:52.357235 ignition[672]: Ignition finished successfully Jan 16 23:55:52.377007 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 23:55:52.380846 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 23:55:52.405340 systemd-networkd[778]: lo: Link UP Jan 16 23:55:52.405986 systemd-networkd[778]: lo: Gained carrier Jan 16 23:55:52.407521 systemd-networkd[778]: Enumeration completed Jan 16 23:55:52.407861 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 23:55:52.408513 systemd[1]: Reached target network.target - Network. Jan 16 23:55:52.409229 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:52.409232 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:52.411002 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:52.411005 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:52.411524 systemd-networkd[778]: eth0: Link UP Jan 16 23:55:52.411527 systemd-networkd[778]: eth0: Gained carrier Jan 16 23:55:52.411535 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
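The fetch-offline records above spell out Ignition's offline config lookup order: snippets in base.d, a platform-specific base directory, a config URL from the kernel cmdline, then the baked-in /usr/lib/ignition/user.ign, before giving up with "resource requires networking" so the networked fetch stage can take over. A simplified sketch of that ladder; the paths are the ones in the log, but the function is illustrative, not Ignition's actual code:

```python
import os

BASE_D = "/usr/lib/ignition/base.d"
PLATFORM_D = "/usr/lib/ignition/base.platform.d/hetzner"
USER_IGN = "/usr/lib/ignition/user.ign"

def find_offline_config(cmdline_url: str | None) -> str:
    """Mimic the lookup order logged by the fetch-offline stage."""
    if os.path.isdir(BASE_D) and os.listdir(BASE_D):
        return BASE_D                # log: 'no configs at ".../base.d"'
    if os.path.isdir(PLATFORM_D):
        return PLATFORM_D            # log: 'no config dir at ".../hetzner"'
    if cmdline_url:
        return cmdline_url           # log: "no config URL provided"
    if os.path.exists(USER_IGN):
        return USER_IGN              # log: 'no config at ".../user.ign"'
    # Everything offline failed; the real service exits here so that the
    # networked "fetch" stage can query the metadata service instead.
    raise RuntimeError("failed to fetch config: resource requires networking")
```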
Jan 16 23:55:52.417048 systemd-networkd[778]: eth1: Link UP Jan 16 23:55:52.417052 systemd-networkd[778]: eth1: Gained carrier Jan 16 23:55:52.417061 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:52.419822 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 23:55:52.432316 ignition[780]: Ignition 2.19.0 Jan 16 23:55:52.432328 ignition[780]: Stage: fetch Jan 16 23:55:52.432519 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:52.432529 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:52.432646 ignition[780]: parsed url from cmdline: "" Jan 16 23:55:52.432650 ignition[780]: no config URL provided Jan 16 23:55:52.432654 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 23:55:52.432663 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jan 16 23:55:52.432708 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 16 23:55:52.433367 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 16 23:55:52.459762 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 23:55:52.476750 systemd-networkd[778]: eth0: DHCPv4 address 188.245.124.206/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 23:55:52.633583 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 16 23:55:52.638912 ignition[780]: GET result: OK Jan 16 23:55:52.639047 ignition[780]: parsing config with SHA512: e419443367c562fe347f3fb5b4fe2d55fa9c9a63ca9b49151af94fb7cb68943f9838215bf9f40c83cfa44ab4f019befcdf9bb3d2682973f3a5dddf5847363be2 Jan 16 23:55:52.645057 unknown[780]: fetched base config from "system" Jan 16 23:55:52.645066 unknown[780]: fetched base config from "system" Jan 16 23:55:52.645538 ignition[780]: fetch: fetch complete Jan 16 23:55:52.645071 unknown[780]: fetched user config from "hetzner" Jan 16 23:55:52.645543 ignition[780]: fetch: fetch passed Jan 16 23:55:52.647232 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 23:55:52.645596 ignition[780]: Ignition finished successfully Jan 16 23:55:52.652876 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 23:55:52.666922 ignition[787]: Ignition 2.19.0 Jan 16 23:55:52.666931 ignition[787]: Stage: kargs Jan 16 23:55:52.667112 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:52.667122 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:52.670986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 23:55:52.668089 ignition[787]: kargs: kargs passed Jan 16 23:55:52.668143 ignition[787]: Ignition finished successfully Jan 16 23:55:52.681235 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 16 23:55:52.695989 ignition[793]: Ignition 2.19.0 Jan 16 23:55:52.696003 ignition[793]: Stage: disks Jan 16 23:55:52.696238 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:52.696253 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:52.698052 ignition[793]: disks: disks passed Jan 16 23:55:52.698118 ignition[793]: Ignition finished successfully Jan 16 23:55:52.701447 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
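The fetch records above show attempt #1 failing with "network is unreachable" and attempt #2 succeeding once DHCP has configured eth0 and eth1; Ignition then logs the SHA512 of the config it is about to parse. A hedged sketch of that retry-then-verify flow against the endpoint shown in the log (the attempt count and backoff constants below are assumptions, not Ignition's actual policy):

```python
import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            # e.g. "network is unreachable" before DHCP has finished
            print(f"GET {URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2  # assumed backoff; the real agent has its own schedule
    raise RuntimeError("userdata fetch failed after all attempts")

data = fetch_userdata()
# Ignition logs the SHA512 of the raw config before parsing it.
print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```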
Jan 16 23:55:52.702693 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 23:55:52.703463 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 23:55:52.704582 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 23:55:52.705639 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:55:52.706786 systemd[1]: Reached target basic.target - Basic System. Jan 16 23:55:52.718043 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 23:55:52.742652 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 16 23:55:52.746959 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 23:55:52.753829 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 23:55:52.807085 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 16 23:55:52.807846 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 23:55:52.809148 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 23:55:52.816769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 23:55:52.819831 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 23:55:52.830749 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (809) Jan 16 23:55:52.830969 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 23:55:52.832432 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 23:55:52.832468 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 23:55:52.838186 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 23:55:52.844033 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:52.844064 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:52.844075 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:55:52.845893 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 23:55:52.851519 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:55:52.851577 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:55:52.855148 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 23:55:52.892164 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 23:55:52.893566 coreos-metadata[811]: Jan 16 23:55:52.893 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 16 23:55:52.896637 coreos-metadata[811]: Jan 16 23:55:52.896 INFO Fetch successful Jan 16 23:55:52.899322 coreos-metadata[811]: Jan 16 23:55:52.896 INFO wrote hostname ci-4081-3-6-n-db2d61d92f to /sysroot/etc/hostname Jan 16 23:55:52.902802 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 16 23:55:52.901153 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
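flatcar-metadata-hostname.service, finished above, does one small job: fetch the hostname from the metadata service and write it under the still-mounted /sysroot so the real root boots with the right name. A minimal sketch of the same idea; the metadata URL is the one in the log, and writing to /sysroot/etc/hostname assumes the initrd context shown here:

```python
import urllib.request

METADATA = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(root: str = "/sysroot") -> str:
    # log: "Fetching ...: Attempt #1" then "Fetch successful"
    with urllib.request.urlopen(METADATA, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    # log: "wrote hostname ci-4081-3-6-n-db2d61d92f to /sysroot/etc/hostname"
    with open(f"{root}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname
```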
Jan 16 23:55:52.908461 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 23:55:52.912977 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 23:55:53.014903 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 23:55:53.020747 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 23:55:53.025936 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 23:55:53.034656 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:53.058971 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 23:55:53.063691 ignition[925]: INFO : Ignition 2.19.0 Jan 16 23:55:53.063691 ignition[925]: INFO : Stage: mount Jan 16 23:55:53.063691 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:53.063691 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:53.067276 ignition[925]: INFO : mount: mount passed Jan 16 23:55:53.067276 ignition[925]: INFO : Ignition finished successfully Jan 16 23:55:53.068054 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 23:55:53.076819 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 23:55:53.180995 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 23:55:53.189041 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 23:55:53.200725 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (937) Jan 16 23:55:53.200811 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:55:53.200838 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:55:53.201648 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:55:53.204773 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:55:53.204831 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:55:53.208492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 23:55:53.238566 ignition[954]: INFO : Ignition 2.19.0 Jan 16 23:55:53.238566 ignition[954]: INFO : Stage: files Jan 16 23:55:53.240041 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:53.240041 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:53.241579 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 16 23:55:53.241579 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 23:55:53.241579 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 23:55:53.245934 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 23:55:53.246918 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 23:55:53.248398 unknown[954]: wrote ssh authorized keys file for user: core Jan 16 23:55:53.250358 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 23:55:53.252203 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 23:55:53.252203 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 23:55:53.252203 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 23:55:53.252203 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 16 23:55:53.338210 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 16 23:55:53.422474 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 23:55:53.425419 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 16 23:55:53.425419 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 23:55:53.425419 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 23:55:53.429891 ignition[954]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:55:53.429891 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 16 23:55:53.827967 systemd-networkd[778]: eth0: Gained IPv6LL Jan 16 23:55:53.832054 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 16 23:55:53.956222 systemd-networkd[778]: eth1: Gained IPv6LL Jan 16 23:55:55.383713 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:55:55.383713 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 23:55:55.388320 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 23:55:55.388320 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jan 16 23:55:55.388320 ignition[954]: INFO : files: files passed Jan 16 23:55:55.388320 ignition[954]: INFO : Ignition finished successfully Jan 16 23:55:55.391732 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 23:55:55.398951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 23:55:55.402161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 23:55:55.406749 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 23:55:55.406891 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 23:55:55.427322 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:55:55.427322 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:55:55.429848 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:55:55.432065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 23:55:55.434178 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 23:55:55.439928 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 23:55:55.484019 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 23:55:55.484237 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 23:55:55.488667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 23:55:55.490327 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 23:55:55.492437 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 23:55:55.498927 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 23:55:55.514524 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 23:55:55.520953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 23:55:55.536568 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:55:55.537401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 23:55:55.538765 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 23:55:55.539643 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 23:55:55.539829 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 23:55:55.541230 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 23:55:55.541874 systemd[1]: Stopped target basic.target - Basic System. Jan 16 23:55:55.542991 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 23:55:55.544053 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 23:55:55.545095 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 23:55:55.546259 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 23:55:55.547281 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 23:55:55.548440 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 16 23:55:55.549434 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 23:55:55.550545 systemd[1]: Stopped target swap.target - Swaps. Jan 16 23:55:55.551467 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 23:55:55.551596 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 23:55:55.552894 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:55:55.553522 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 23:55:55.554573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 23:55:55.556809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 23:55:55.557495 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 23:55:55.557649 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 23:55:55.559265 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 23:55:55.559384 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 23:55:55.560549 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 23:55:55.560661 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 23:55:55.561734 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 23:55:55.561831 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 23:55:55.574035 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 23:55:55.576938 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 23:55:55.579857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 23:55:55.580037 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:55:55.581192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 23:55:55.581604 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 23:55:55.593125 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 23:55:55.597932 ignition[1008]: INFO : Ignition 2.19.0 Jan 16 23:55:55.597932 ignition[1008]: INFO : Stage: umount Jan 16 23:55:55.597932 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:55:55.597932 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:55:55.593308 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 23:55:55.603174 ignition[1008]: INFO : umount: umount passed Jan 16 23:55:55.603709 ignition[1008]: INFO : Ignition finished successfully Jan 16 23:55:55.608241 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 23:55:55.610055 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 23:55:55.610260 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 23:55:55.611407 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 23:55:55.611452 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 23:55:55.615298 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 23:55:55.615354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 23:55:55.618396 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 23:55:55.618447 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 16 23:55:55.620499 systemd[1]: Stopped target network.target - Network. Jan 16 23:55:55.622389 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 23:55:55.622461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 23:55:55.625827 systemd[1]: Stopped target paths.target - Path Units. Jan 16 23:55:55.630801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 23:55:55.631254 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 23:55:55.632299 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 23:55:55.634304 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 23:55:55.635448 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 23:55:55.635499 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 23:55:55.636308 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 23:55:55.636342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 23:55:55.637253 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 23:55:55.637306 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 23:55:55.640320 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 23:55:55.640382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 23:55:55.642488 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 23:55:55.645574 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 23:55:55.650690 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 16 23:55:55.653989 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 23:55:55.654100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 23:55:55.655161 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 16 23:55:55.657167 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 23:55:55.657578 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 23:55:55.662015 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 23:55:55.662576 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 23:55:55.665349 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 23:55:55.665401 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 23:55:55.666161 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 23:55:55.666214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 23:55:55.672843 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 23:55:55.673321 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 23:55:55.673385 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 23:55:55.674978 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 23:55:55.675024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:55:55.676524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 23:55:55.676572 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 23:55:55.677264 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 23:55:55.677302 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 16 23:55:55.678075 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 23:55:55.691337 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 23:55:55.691461 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 23:55:55.703856 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 23:55:55.704157 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 23:55:55.707300 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 23:55:55.707380 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 23:55:55.709141 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 23:55:55.709174 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 23:55:55.710746 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 23:55:55.710804 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 23:55:55.714710 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 23:55:55.714772 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 23:55:55.716273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 23:55:55.716318 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 23:55:55.722815 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 23:55:55.724074 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 23:55:55.724175 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:55:55.728083 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 23:55:55.728169 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 23:55:55.729596 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 23:55:55.729762 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:55:55.730613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:55:55.730688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:55:55.732634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 23:55:55.732740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 23:55:55.735363 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 23:55:55.743146 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 23:55:55.755196 systemd[1]: Switching root. Jan 16 23:55:55.791913 systemd-journald[237]: Journal stopped Jan 16 23:55:56.760570 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jan 16 23:55:56.760687 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 23:55:56.760711 kernel: SELinux: policy capability open_perms=1 Jan 16 23:55:56.760722 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 23:55:56.760733 kernel: SELinux: policy capability always_check_network=0 Jan 16 23:55:56.760742 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 23:55:56.760752 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 23:55:56.760762 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 23:55:56.760771 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 23:55:56.760783 kernel: audit: type=1403 audit(1768607755.972:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 23:55:56.760794 systemd[1]: Successfully loaded SELinux policy in 41.686ms. Jan 16 23:55:56.760816 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.398ms. Jan 16 23:55:56.760828 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 23:55:56.760839 systemd[1]: Detected virtualization kvm. Jan 16 23:55:56.760850 systemd[1]: Detected architecture arm64. Jan 16 23:55:56.760860 systemd[1]: Detected first boot. Jan 16 23:55:56.760870 systemd[1]: Hostname set to <ci-4081-3-6-n-db2d61d92f>. Jan 16 23:55:56.760881 systemd[1]: Initializing machine ID from VM UUID. Jan 16 23:55:56.760892 zram_generator::config[1067]: No configuration found. Jan 16 23:55:56.760905 systemd[1]: Populated /etc with preset unit settings. Jan 16 23:55:56.760916 systemd[1]: Queued start job for default target multi-user.target. Jan 16 23:55:56.760926 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 16 23:55:56.760938 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 23:55:56.760953 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 23:55:56.760964 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 23:55:56.760974 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 23:55:56.760985 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 23:55:56.760997 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 23:55:56.761009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 23:55:56.761019 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 23:55:56.761030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 23:55:56.761041 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 23:55:56.761051 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 23:55:56.761062 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 23:55:56.761073 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 23:55:56.761084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
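"Initializing machine ID from VM UUID" above is systemd's first-boot path: with no /etc/machine-id on disk yet, it derives one from the hypervisor-provided VM UUID instead of rolling a random ID, so the machine keeps a stable identity across reprovisioning. A sketch of the derivation, assuming the UUID is read from the usual DMI sysfs path; systemd's real logic lives in C (sd-id128), and this only shows the normalization step:

```python
def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    """Turn the VM's DMI UUID into a machine-id-shaped string.

    The normalization is: drop the dashes, lowercase the hex digits,
    and insist on exactly 128 bits of hex.
    """
    with open(path) as f:
        uuid = f.read().strip()
    mid = uuid.replace("-", "").lower()
    if len(mid) != 32 or any(c not in "0123456789abcdef" for c in mid):
        raise ValueError(f"not a usable UUID: {uuid!r}")
    return mid  # what first boot would write to /etc/machine-id
```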
Jan 16 23:55:56.761096 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 16 23:55:56.761107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 23:55:56.761117 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 23:55:56.761127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 23:55:56.761142 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 23:55:56.761153 systemd[1]: Reached target slices.target - Slice Units. Jan 16 23:55:56.761164 systemd[1]: Reached target swap.target - Swaps. Jan 16 23:55:56.761176 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 23:55:56.761187 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 23:55:56.761198 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 23:55:56.761209 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 23:55:56.761219 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 23:55:56.761230 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 23:55:56.761240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 23:55:56.761251 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 23:55:56.761261 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 23:55:56.761274 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 23:55:56.761284 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 23:55:56.761295 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 23:55:56.761306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 23:55:56.761317 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 23:55:56.761331 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 23:55:56.761347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:56.761357 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 23:55:56.761368 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 23:55:56.761379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:56.761390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 23:55:56.761401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:56.761411 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 23:55:56.761422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:56.761435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 23:55:56.761446 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 16 23:55:56.761458 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 16 23:55:56.761469 kernel: fuse: init (API version 7.39) Jan 16 23:55:56.761479 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 23:55:56.761490 kernel: loop: module loaded Jan 16 23:55:56.761500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 23:55:56.761510 kernel: ACPI: bus type drm_connector registered Jan 16 23:55:56.761522 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 23:55:56.761533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 23:55:56.761544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 23:55:56.761555 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 23:55:56.761566 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 23:55:56.761581 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 23:55:56.761593 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 23:55:56.761603 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 23:55:56.761614 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 23:55:56.766464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:55:56.766497 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 23:55:56.766509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 23:55:56.766520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:56.766531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:56.766577 systemd-journald[1152]: Collecting audit messages is disabled. Jan 16 23:55:56.766615 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:55:56.767949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 23:55:56.767979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:56.767991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:56.768003 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 23:55:56.768014 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 23:55:56.768032 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 23:55:56.768043 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:56.768053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:56.768064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 23:55:56.768080 systemd-journald[1152]: Journal started Jan 16 23:55:56.768111 systemd-journald[1152]: Runtime Journal (/run/log/journal/60eae71b81bc44c6824b60ca6f6d7592) is 8.0M, max 76.6M, 68.6M free. Jan 16 23:55:56.773239 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 23:55:56.772449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 23:55:56.774026 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 23:55:56.786751 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 23:55:56.794833 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 16 23:55:56.798912 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 23:55:56.799744 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 23:55:56.812019 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 23:55:56.816068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 23:55:56.822898 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:55:56.827035 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 23:55:56.831223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:56.835842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 23:55:56.848927 systemd-journald[1152]: Time spent on flushing to /var/log/journal/60eae71b81bc44c6824b60ca6f6d7592 is 46.855ms for 1111 entries. Jan 16 23:55:56.848927 systemd-journald[1152]: System Journal (/var/log/journal/60eae71b81bc44c6824b60ca6f6d7592) is 8.0M, max 584.8M, 576.8M free. Jan 16 23:55:56.906842 systemd-journald[1152]: Received client request to flush runtime journal. Jan 16 23:55:56.853291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 23:55:56.858038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:55:56.861021 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 23:55:56.861940 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 23:55:56.862961 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 23:55:56.867550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 23:55:56.877308 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 23:55:56.896172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:55:56.912229 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 23:55:56.914892 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 16 23:55:56.914902 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 16 23:55:56.916918 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 23:55:56.920810 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 23:55:56.927025 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 23:55:56.962552 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 23:55:56.974845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 23:55:56.991040 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Jan 16 23:55:56.991394 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Jan 16 23:55:56.998241 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:55:57.476275 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
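The journald flush report above is easy to sanity-check: 46.855 ms for 1111 entries works out to roughly 42 µs per entry moved from the runtime journal in /run to persistent storage under /var/log/journal:

```python
flush_ms, entries = 46.855, 1111        # figures from the journald line above
per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.1f} us per entry")   # ~42.2 us
```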
Jan 16 23:55:57.481963 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 23:55:57.507960 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Jan 16 23:55:57.532209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 23:55:57.542915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 23:55:57.562147 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 23:55:57.589804 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 16 23:55:57.654276 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 23:55:57.753716 systemd-networkd[1238]: lo: Link UP Jan 16 23:55:57.754766 systemd-networkd[1238]: lo: Gained carrier Jan 16 23:55:57.757663 systemd-networkd[1238]: Enumeration completed Jan 16 23:55:57.759269 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 16 23:55:57.759295 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Jan 16 23:55:57.759398 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 23:55:57.760360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:57.762030 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:57.762037 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:57.763356 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:57.763436 systemd-networkd[1238]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:55:57.764359 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:57.765859 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:57.765891 systemd-networkd[1238]: eth0: Link UP Jan 16 23:55:57.765894 systemd-networkd[1238]: eth0: Gained carrier Jan 16 23:55:57.765903 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:57.769945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:57.774878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:57.781388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:57.783761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 23:55:57.783816 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 16 23:55:57.787670 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 23:55:57.791031 systemd-networkd[1238]: eth1: Link UP Jan 16 23:55:57.791041 systemd-networkd[1238]: eth1: Gained carrier Jan 16 23:55:57.791064 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:55:57.791894 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 23:55:57.793081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:57.793286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:57.831896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:57.832104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:57.834425 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:55:57.848032 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1244) Jan 16 23:55:57.848196 systemd-networkd[1238]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 23:55:57.852965 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:57.853941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:57.859687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:57.864440 systemd-networkd[1238]: eth0: DHCPv4 address 188.245.124.206/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 23:55:57.887085 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 16 23:55:57.887169 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 23:55:57.887183 kernel: [drm] features: -context_init Jan 16 23:55:57.887954 kernel: [drm] number of scanouts: 1 Jan 16 23:55:57.888646 kernel: [drm] number of cap sets: 0 Jan 16 23:55:57.889652 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 16 23:55:57.897717 kernel: Console: switching to colour frame buffer device 160x50 Jan 16 23:55:57.913649 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 23:55:57.921007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:55:57.930115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:55:57.934215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:55:57.934505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:55:57.941924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:55:58.010436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:55:58.032349 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 23:55:58.041878 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 23:55:58.058192 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:55:58.083442 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 23:55:58.084854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 16 23:55:58.089907 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 23:55:58.104528 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:55:58.133401 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 23:55:58.136305 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 23:55:58.137902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 23:55:58.138026 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 23:55:58.138740 systemd[1]: Reached target machines.target - Containers. Jan 16 23:55:58.140901 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 23:55:58.146902 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 23:55:58.157994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 23:55:58.160390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:58.164882 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 23:55:58.176064 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 23:55:58.180650 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 23:55:58.189717 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 23:55:58.212652 kernel: loop0: detected capacity change from 0 to 114432 Jan 16 23:55:58.214604 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 23:55:58.217911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 23:55:58.221210 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 23:55:58.246733 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 23:55:58.266701 kernel: loop1: detected capacity change from 0 to 8 Jan 16 23:55:58.295741 kernel: loop2: detected capacity change from 0 to 114328 Jan 16 23:55:58.337796 kernel: loop3: detected capacity change from 0 to 207008 Jan 16 23:55:58.376869 kernel: loop4: detected capacity change from 0 to 114432 Jan 16 23:55:58.395657 kernel: loop5: detected capacity change from 0 to 8 Jan 16 23:55:58.398664 kernel: loop6: detected capacity change from 0 to 114328 Jan 16 23:55:58.409665 kernel: loop7: detected capacity change from 0 to 207008 Jan 16 23:55:58.430727 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 16 23:55:58.431267 (sd-merge)[1330]: Merged extensions into '/usr'. Jan 16 23:55:58.437394 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 23:55:58.437417 systemd[1]: Reloading... Jan 16 23:55:58.530879 zram_generator::config[1361]: No configuration found. Jan 16 23:55:58.628429 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
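The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr, which is also why the loop0-loop7 capacity changes appear just before. A short sketch of inspecting and refreshing that merge with the standard systemd-sysext commands:

    systemd-sysext status    # show which extensions are merged into /usr
    systemd-sysext refresh   # unmerge and re-merge after adding or removing one
    # Extension images are picked up from directories such as
    # /etc/extensions and /var/lib/extensions.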
Jan 16 23:55:58.666453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:55:58.733488 systemd[1]: Reloading finished in 295 ms. Jan 16 23:55:58.751896 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 23:55:58.754155 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 23:55:58.764011 systemd[1]: Starting ensure-sysext.service... Jan 16 23:55:58.769966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 23:55:58.774486 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Jan 16 23:55:58.774506 systemd[1]: Reloading... Jan 16 23:55:58.802067 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 23:55:58.802338 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 23:55:58.803071 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 23:55:58.803302 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jan 16 23:55:58.803350 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jan 16 23:55:58.806283 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 23:55:58.806302 systemd-tmpfiles[1403]: Skipping /boot Jan 16 23:55:58.816524 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 23:55:58.816546 systemd-tmpfiles[1403]: Skipping /boot Jan 16 23:55:58.864657 zram_generator::config[1432]: No configuration found. Jan 16 23:55:58.978574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:55:59.047137 systemd[1]: Reloading finished in 272 ms. Jan 16 23:55:59.065458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 23:55:59.081944 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:55:59.087984 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 23:55:59.093190 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 23:55:59.104881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 23:55:59.122835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 23:55:59.130883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:59.134135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:59.147094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:59.155900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:59.156926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:59.162263 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
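The docker.socket reload warning above asks for the unit to stop referencing the legacy /var/run directory. A sketch of the conventional fix via a drop-in, so the unit shipped in /usr stays untouched; the drop-in file name is illustrative:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=                  # an empty assignment clears the inherited path
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload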
Jan 16 23:55:59.171991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:59.173123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:59.176448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:59.176620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:59.187402 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:59.188142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:59.197875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:59.203967 systemd-networkd[1238]: eth0: Gained IPv6LL Jan 16 23:55:59.205984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:59.220163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:59.230067 augenrules[1513]: No rules Jan 16 23:55:59.233919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:59.235794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:59.244134 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 23:55:59.247817 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 23:55:59.249223 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:55:59.256180 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 23:55:59.258153 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 23:55:59.259573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:59.260176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:59.261583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:59.262099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:59.264012 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:59.265905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:59.278397 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 23:55:59.281416 systemd-resolved[1480]: Positive Trust Anchors: Jan 16 23:55:59.281440 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 23:55:59.281473 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 23:55:59.287440 systemd-resolved[1480]: Using system hostname 'ci-4081-3-6-n-db2d61d92f'. Jan 16 23:55:59.288442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
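The Positive/Negative Trust Anchors listing above is systemd-resolved loading its built-in DNSSEC root anchor plus the standard exclusions for private and special-use zones, after which it settles on the hostname ci-4081-3-6-n-db2d61d92f. A sketch of querying that state at runtime:

    resolvectl status                           # per-link DNS servers and DNSSEC mode
    resolvectl query ci-4081-3-6-n-db2d61d92f   # resolve the hostname chosen above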
Jan 16 23:55:59.288860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:59.288957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:55:59.291789 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 23:55:59.293035 systemd[1]: Reached target network.target - Network. Jan 16 23:55:59.293850 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 23:55:59.295083 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:55:59.296120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:55:59.302033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:55:59.312204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 23:55:59.318024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:55:59.322484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:55:59.325024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:55:59.325194 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:55:59.327489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:55:59.327752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:55:59.335504 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:55:59.335879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 23:55:59.339907 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:55:59.340221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:55:59.347550 systemd[1]: Finished ensure-sysext.service. Jan 16 23:55:59.349264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:55:59.352237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:55:59.359523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:55:59.359586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:55:59.371961 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 23:55:59.419063 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 23:55:59.422171 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:55:59.423816 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 23:55:59.425006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 23:55:59.426246 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 16 23:55:59.427104 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 23:55:59.427147 systemd[1]: Reached target paths.target - Path Units. Jan 16 23:55:59.427660 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 23:55:59.428401 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 23:55:59.429173 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 23:55:59.429898 systemd[1]: Reached target timers.target - Timer Units. Jan 16 23:55:59.430991 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 23:55:59.433362 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 23:55:59.435253 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 23:55:59.439571 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 23:55:59.440525 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 23:55:59.441705 systemd[1]: Reached target basic.target - Basic System. Jan 16 23:55:59.442543 systemd[1]: System is tainted: cgroupsv1 Jan 16 23:55:59.442600 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:55:59.442644 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:55:59.445503 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 23:55:59.465147 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 23:55:59.470885 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 23:55:59.474476 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 23:55:59.484987 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 23:55:59.485854 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 23:55:59.494802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:55:59.499891 jq[1560]: false Jan 16 23:55:59.506878 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 23:55:59.508991 coreos-metadata[1557]: Jan 16 23:55:59.508 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 16 23:55:59.508991 coreos-metadata[1557]: Jan 16 23:55:59.508 INFO Fetch successful Jan 16 23:55:59.511792 coreos-metadata[1557]: Jan 16 23:55:59.511 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 16 23:55:59.511968 coreos-metadata[1557]: Jan 16 23:55:59.511 INFO Fetch successful Jan 16 23:55:59.518306 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 16 23:55:59.529234 extend-filesystems[1563]: Found loop4 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found loop5 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found loop6 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found loop7 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda1 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda2 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda3 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found usr Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda4 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda6 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda7 Jan 16 23:55:59.529234 extend-filesystems[1563]: Found sda9 Jan 16 23:55:59.529234 extend-filesystems[1563]: Checking size of /dev/sda9 Jan 16 23:55:59.533010 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 23:55:59.541582 dbus-daemon[1559]: [system] SELinux support is enabled Jan 16 23:55:59.540800 systemd-timesyncd[1552]: Contacted time server 162.55.190.98:123 (0.flatcar.pool.ntp.org). Jan 16 23:55:59.541054 systemd-timesyncd[1552]: Initial clock synchronization to Fri 2026-01-16 23:55:59.618005 UTC. Jan 16 23:55:59.548048 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 16 23:55:59.561809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 23:55:59.563833 extend-filesystems[1563]: Resized partition /dev/sda9 Jan 16 23:55:59.570154 extend-filesystems[1583]: resize2fs 1.47.1 (20-May-2024) Jan 16 23:55:59.577789 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 23:55:59.581490 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 16 23:55:59.586353 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 23:55:59.589799 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 23:55:59.595914 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 23:55:59.602064 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 23:55:59.607518 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 23:55:59.636095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 23:55:59.637463 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 23:55:59.640333 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 23:55:59.640609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 23:55:59.648981 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 23:55:59.653937 jq[1594]: true Jan 16 23:55:59.660056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 23:55:59.660349 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
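systemd-timesyncd has stepped the clock once against 0.flatcar.pool.ntp.org (above) and will discipline it from here on. A sketch of checking that synchronization afterwards:

    timedatectl timesync-status   # server, stratum, poll interval, offset
    timedatectl show-timesync     # the same state in key=value form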
Jan 16 23:55:59.688479 update_engine[1591]: I20260116 23:55:59.681273 1591 main.cc:92] Flatcar Update Engine starting Jan 16 23:55:59.703962 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1252) Jan 16 23:55:59.704065 update_engine[1591]: I20260116 23:55:59.700788 1591 update_check_scheduler.cc:74] Next update check in 10m17s Jan 16 23:55:59.709947 (ntainerd)[1615]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 23:55:59.727397 systemd[1]: Started update-engine.service - Update Engine. Jan 16 23:55:59.731391 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 23:55:59.731765 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 23:55:59.733073 jq[1609]: true Jan 16 23:55:59.733880 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 23:55:59.733901 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 23:55:59.744564 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 23:55:59.753901 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 23:55:59.779909 systemd-networkd[1238]: eth1: Gained IPv6LL Jan 16 23:55:59.784663 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 16 23:55:59.806900 extend-filesystems[1583]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 16 23:55:59.806900 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 16 23:55:59.806900 extend-filesystems[1583]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 16 23:55:59.830655 extend-filesystems[1563]: Resized filesystem in /dev/sda9 Jan 16 23:55:59.830655 extend-filesystems[1563]: Found sr0 Jan 16 23:55:59.831577 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 23:55:59.840433 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 23:55:59.874814 tar[1607]: linux-arm64/LICENSE Jan 16 23:55:59.874814 tar[1607]: linux-arm64/helm Jan 16 23:55:59.894717 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 23:55:59.896977 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 23:55:59.924804 systemd-logind[1590]: New seat seat0. Jan 16 23:55:59.932813 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 23:55:59.935251 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 16 23:55:59.935570 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 23:55:59.980574 bash[1659]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:55:59.985384 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 23:56:00.009965 systemd[1]: Starting sshkeys.service... Jan 16 23:56:00.048965 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
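The extend-filesystems/resize2fs exchange above is Flatcar growing the root filesystem on /dev/sda9 online, from 1617920 to 9393147 4k blocks, to fill the disk on first boot. A sketch of the equivalent manual steps; growpart (from cloud-utils) is an assumption here, only resize2fs itself appears in this log:

    growpart /dev/sda 9    # grow partition 9 to the end of the disk (assumed tool)
    resize2fs /dev/sda9    # grow the mounted ext4 filesystem in place
    df -h /                # confirm the new size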
Jan 16 23:56:00.076122 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 23:56:00.080863 containerd[1615]: time="2026-01-16T23:56:00.076622480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 23:56:00.128180 coreos-metadata[1664]: Jan 16 23:56:00.127 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 16 23:56:00.133350 coreos-metadata[1664]: Jan 16 23:56:00.132 INFO Fetch successful Jan 16 23:56:00.134189 unknown[1664]: wrote ssh authorized keys file for user: core Jan 16 23:56:00.169023 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:56:00.172968 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 23:56:00.184396 systemd[1]: Finished sshkeys.service. Jan 16 23:56:00.199744 containerd[1615]: time="2026-01-16T23:56:00.198516071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:00.204614 containerd[1615]: time="2026-01-16T23:56:00.204548581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:00.204614 containerd[1615]: time="2026-01-16T23:56:00.204603351Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 23:56:00.204614 containerd[1615]: time="2026-01-16T23:56:00.204627505Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.204851594Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.204883018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.204956045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.204969091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205199845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205218465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205231956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205242215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205323158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205527052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:56:00.205738 containerd[1615]: time="2026-01-16T23:56:00.205694149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:56:00.206021 containerd[1615]: time="2026-01-16T23:56:00.205710912Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 23:56:00.206021 containerd[1615]: time="2026-01-16T23:56:00.205794319Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 23:56:00.206021 containerd[1615]: time="2026-01-16T23:56:00.205836729Z" level=info msg="metadata content store policy set" policy=shared Jan 16 23:56:00.217302 containerd[1615]: time="2026-01-16T23:56:00.217063596Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 23:56:00.217302 containerd[1615]: time="2026-01-16T23:56:00.217133554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 23:56:00.217302 containerd[1615]: time="2026-01-16T23:56:00.217151528Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 23:56:00.217302 containerd[1615]: time="2026-01-16T23:56:00.217169300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 23:56:00.217302 containerd[1615]: time="2026-01-16T23:56:00.217188081Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217371052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217713043Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217890601Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217911645Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217928003Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217943634Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217958135Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217972393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.217986934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.218001959Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.218014480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.218026799Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.218041219Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 23:56:00.218372 containerd[1615]: time="2026-01-16T23:56:00.218064444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218079792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218092839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218105966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218137228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218149426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218169783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218182870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218198501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218210215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218222090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218235701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218257593Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218278799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218683 containerd[1615]: time="2026-01-16T23:56:00.218291481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218302185Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218409302Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218427680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218438949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218451187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218460639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218473846Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218484227Z" level=info msg="NRI interface is disabled by configuration." Jan 16 23:56:00.218968 containerd[1615]: time="2026-01-16T23:56:00.218494688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 16 23:56:00.223604 containerd[1615]: time="2026-01-16T23:56:00.221916165Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 23:56:00.223604 containerd[1615]: time="2026-01-16T23:56:00.222002158Z" level=info msg="Connect containerd service" Jan 16 23:56:00.223604 containerd[1615]: time="2026-01-16T23:56:00.222111375Z" level=info msg="using legacy CRI server" Jan 16 23:56:00.223604 containerd[1615]: time="2026-01-16T23:56:00.222119130Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 23:56:00.223604 containerd[1615]: time="2026-01-16T23:56:00.222252258Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 23:56:00.223604 containerd[1615]: time="2026-01-16T23:56:00.223333161Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 
23:56:00.228640 containerd[1615]: time="2026-01-16T23:56:00.227273379Z" level=info msg="Start subscribing containerd event" Jan 16 23:56:00.228640 containerd[1615]: time="2026-01-16T23:56:00.227359210Z" level=info msg="Start recovering state" Jan 16 23:56:00.228640 containerd[1615]: time="2026-01-16T23:56:00.227447787Z" level=info msg="Start event monitor" Jan 16 23:56:00.228640 containerd[1615]: time="2026-01-16T23:56:00.227459581Z" level=info msg="Start snapshots syncer" Jan 16 23:56:00.228640 containerd[1615]: time="2026-01-16T23:56:00.227470971Z" level=info msg="Start cni network conf syncer for default" Jan 16 23:56:00.228640 containerd[1615]: time="2026-01-16T23:56:00.227483291Z" level=info msg="Start streaming server" Jan 16 23:56:00.230141 containerd[1615]: time="2026-01-16T23:56:00.230095616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 23:56:00.231115 containerd[1615]: time="2026-01-16T23:56:00.230166987Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 23:56:00.231115 containerd[1615]: time="2026-01-16T23:56:00.230227169Z" level=info msg="containerd successfully booted in 0.157834s" Jan 16 23:56:00.230365 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 23:56:00.284321 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 23:56:00.604217 tar[1607]: linux-arm64/README.md Jan 16 23:56:00.625368 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 23:56:00.916815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:00.918812 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:01.078597 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 23:56:01.111936 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 23:56:01.120086 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 23:56:01.144108 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 23:56:01.144406 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 23:56:01.156818 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 23:56:01.170261 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 23:56:01.180313 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 23:56:01.194193 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 16 23:56:01.196120 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 23:56:01.198213 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 23:56:01.199679 systemd[1]: Startup finished in 7.062s (kernel) + 5.268s (userspace) = 12.331s. Jan 16 23:56:01.458180 kubelet[1696]: E0116 23:56:01.457961 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:01.465896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:01.466081 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
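The kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit keeps failing and restarting until the node is actually joined to a cluster. Purely for illustration, a minimal KubeletConfiguration of the general shape kubeadm drops there; every value below is a placeholder, not taken from this host:

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # placeholder; must match the runtime's cgroup driver
    clusterDNS:
      - 10.96.0.10               # placeholder in-cluster DNS address
    clusterDomain: cluster.local
    EOF
    systemctl restart kubelet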
Jan 16 23:56:11.717086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 23:56:11.727021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:11.870992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:11.884318 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:11.935805 kubelet[1740]: E0116 23:56:11.935739 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:11.940787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:11.940966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:15.401224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 23:56:15.407023 systemd[1]: Started sshd@0-188.245.124.206:22-4.153.228.146:47092.service - OpenSSH per-connection server daemon (4.153.228.146:47092). Jan 16 23:56:16.044858 sshd[1748]: Accepted publickey for core from 4.153.228.146 port 47092 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:16.047764 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:16.059686 systemd-logind[1590]: New session 1 of user core. Jan 16 23:56:16.061079 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 23:56:16.066979 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 23:56:16.085798 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 23:56:16.098122 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 23:56:16.102481 (systemd)[1754]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 23:56:16.215245 systemd[1754]: Queued start job for default target default.target. Jan 16 23:56:16.215663 systemd[1754]: Created slice app.slice - User Application Slice. Jan 16 23:56:16.215684 systemd[1754]: Reached target paths.target - Paths. Jan 16 23:56:16.215695 systemd[1754]: Reached target timers.target - Timers. Jan 16 23:56:16.227842 systemd[1754]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 23:56:16.240048 systemd[1754]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 23:56:16.240435 systemd[1754]: Reached target sockets.target - Sockets. Jan 16 23:56:16.241015 systemd[1754]: Reached target basic.target - Basic System. Jan 16 23:56:16.241196 systemd[1754]: Reached target default.target - Main User Target. Jan 16 23:56:16.241225 systemd[1754]: Startup finished in 131ms. Jan 16 23:56:16.241425 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 23:56:16.249111 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 23:56:16.710409 systemd[1]: Started sshd@1-188.245.124.206:22-4.153.228.146:47106.service - OpenSSH per-connection server daemon (4.153.228.146:47106). 
Jan 16 23:56:17.356026 sshd[1766]: Accepted publickey for core from 4.153.228.146 port 47106 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:17.358995 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:17.363976 systemd-logind[1590]: New session 2 of user core. Jan 16 23:56:17.375252 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 23:56:17.816199 sshd[1766]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:17.821167 systemd-logind[1590]: Session 2 logged out. Waiting for processes to exit. Jan 16 23:56:17.823277 systemd[1]: sshd@1-188.245.124.206:22-4.153.228.146:47106.service: Deactivated successfully. Jan 16 23:56:17.826175 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 23:56:17.827202 systemd-logind[1590]: Removed session 2. Jan 16 23:56:17.926410 systemd[1]: Started sshd@2-188.245.124.206:22-4.153.228.146:47116.service - OpenSSH per-connection server daemon (4.153.228.146:47116). Jan 16 23:56:18.549061 sshd[1774]: Accepted publickey for core from 4.153.228.146 port 47116 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:18.551204 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:18.556166 systemd-logind[1590]: New session 3 of user core. Jan 16 23:56:18.567072 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 23:56:18.989683 sshd[1774]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:18.996745 systemd-logind[1590]: Session 3 logged out. Waiting for processes to exit. Jan 16 23:56:18.997282 systemd[1]: sshd@2-188.245.124.206:22-4.153.228.146:47116.service: Deactivated successfully. Jan 16 23:56:19.002371 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 23:56:19.004599 systemd-logind[1590]: Removed session 3. Jan 16 23:56:19.098080 systemd[1]: Started sshd@3-188.245.124.206:22-4.153.228.146:47128.service - OpenSSH per-connection server daemon (4.153.228.146:47128). Jan 16 23:56:19.726941 sshd[1782]: Accepted publickey for core from 4.153.228.146 port 47128 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:19.730040 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:19.735606 systemd-logind[1590]: New session 4 of user core. Jan 16 23:56:19.746178 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 23:56:20.170064 sshd[1782]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:20.176058 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Jan 16 23:56:20.176792 systemd[1]: sshd@3-188.245.124.206:22-4.153.228.146:47128.service: Deactivated successfully. Jan 16 23:56:20.178524 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 23:56:20.181813 systemd-logind[1590]: Removed session 4. Jan 16 23:56:20.280068 systemd[1]: Started sshd@4-188.245.124.206:22-4.153.228.146:47142.service - OpenSSH per-connection server daemon (4.153.228.146:47142). Jan 16 23:56:20.906169 sshd[1790]: Accepted publickey for core from 4.153.228.146 port 47142 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:20.908045 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:20.912950 systemd-logind[1590]: New session 5 of user core. Jan 16 23:56:20.923138 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 16 23:56:21.263236 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 23:56:21.263521 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:21.283066 sudo[1794]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:21.385294 sshd[1790]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:21.391200 systemd[1]: sshd@4-188.245.124.206:22-4.153.228.146:47142.service: Deactivated successfully. Jan 16 23:56:21.394675 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 23:56:21.395510 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Jan 16 23:56:21.398365 systemd-logind[1590]: Removed session 5. Jan 16 23:56:21.494379 systemd[1]: Started sshd@5-188.245.124.206:22-4.153.228.146:47158.service - OpenSSH per-connection server daemon (4.153.228.146:47158). Jan 16 23:56:22.025836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 23:56:22.034915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:22.137857 sshd[1799]: Accepted publickey for core from 4.153.228.146 port 47158 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:22.141717 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:22.150166 systemd-logind[1590]: New session 6 of user core. Jan 16 23:56:22.154875 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 23:56:22.160441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:22.167069 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:22.214711 kubelet[1813]: E0116 23:56:22.214659 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:22.219978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:22.220209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:22.486136 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 23:56:22.486462 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:22.490273 sudo[1824]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:22.496732 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 23:56:22.497080 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:22.514438 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 23:56:22.517193 auditctl[1827]: No rules Jan 16 23:56:22.517578 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 23:56:22.517887 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 23:56:22.523193 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:56:22.551953 augenrules[1846]: No rules Jan 16 23:56:22.553873 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
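The sequence above is the standard flush-and-reload cycle for audit rules: the two shipped rule files are removed, audit-rules.service is stopped (auditctl reports "No rules" once flushed) and started again, and augenrules finds nothing left under /etc/audit/rules.d. Done by hand, with an illustrative watch rule that is not present on this host, the same cycle looks like:

    auditctl -D                                   # flush every loaded rule
    echo '-w /etc/ssh/sshd_config -p wa -k sshd' \
        > /etc/audit/rules.d/90-sshd.rules        # hypothetical rule file
    augenrules --load                             # merge rules.d and load the result
    auditctl -l                                   # list what is now active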
Jan 16 23:56:22.556951 sudo[1823]: pam_unix(sudo:session): session closed for user root Jan 16 23:56:22.659049 sshd[1799]: pam_unix(sshd:session): session closed for user core Jan 16 23:56:22.663392 systemd[1]: sshd@5-188.245.124.206:22-4.153.228.146:47158.service: Deactivated successfully. Jan 16 23:56:22.667371 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 23:56:22.669016 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Jan 16 23:56:22.670370 systemd-logind[1590]: Removed session 6. Jan 16 23:56:22.765141 systemd[1]: Started sshd@6-188.245.124.206:22-4.153.228.146:47164.service - OpenSSH per-connection server daemon (4.153.228.146:47164). Jan 16 23:56:23.378285 sshd[1855]: Accepted publickey for core from 4.153.228.146 port 47164 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:56:23.380670 sshd[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:56:23.385223 systemd-logind[1590]: New session 7 of user core. Jan 16 23:56:23.395115 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 23:56:23.716188 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 23:56:23.716528 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:56:24.005097 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 23:56:24.006289 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 23:56:24.254615 dockerd[1874]: time="2026-01-16T23:56:24.254028585Z" level=info msg="Starting up" Jan 16 23:56:24.379028 dockerd[1874]: time="2026-01-16T23:56:24.378711644Z" level=info msg="Loading containers: start." Jan 16 23:56:24.491883 kernel: Initializing XFRM netlink socket Jan 16 23:56:24.568255 systemd-networkd[1238]: docker0: Link UP Jan 16 23:56:24.582299 dockerd[1874]: time="2026-01-16T23:56:24.582215464Z" level=info msg="Loading containers: done." Jan 16 23:56:24.599140 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1127900764-merged.mount: Deactivated successfully. Jan 16 23:56:24.601777 dockerd[1874]: time="2026-01-16T23:56:24.601716953Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 23:56:24.601893 dockerd[1874]: time="2026-01-16T23:56:24.601835600Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 23:56:24.601974 dockerd[1874]: time="2026-01-16T23:56:24.601940882Z" level=info msg="Daemon has completed initialization" Jan 16 23:56:24.643215 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 23:56:24.643891 dockerd[1874]: time="2026-01-16T23:56:24.643251376Z" level=info msg="API listen on /run/docker.sock" Jan 16 23:56:25.679262 containerd[1615]: time="2026-01-16T23:56:25.678919994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 23:56:26.470942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969931416.mount: Deactivated successfully. 
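dockerd is now serving its API on /run/docker.sock with the overlay2 storage driver; the "Not using native diff" line above is only a performance note tied to CONFIG_OVERLAY_FS_REDIRECT_DIR in this kernel. A quick sketch of verifying the daemon from the same host:

    docker version                            # client and daemon both answer on /run/docker.sock
    docker info | grep -i 'storage driver'    # should report overlay2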
Jan 16 23:56:27.305170 containerd[1615]: time="2026-01-16T23:56:27.305090433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:27.306776 containerd[1615]: time="2026-01-16T23:56:27.306713584Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080" Jan 16 23:56:27.308379 containerd[1615]: time="2026-01-16T23:56:27.308325292Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:27.311650 containerd[1615]: time="2026-01-16T23:56:27.311486492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:27.313490 containerd[1615]: time="2026-01-16T23:56:27.312913711Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.633949582s" Jan 16 23:56:27.313490 containerd[1615]: time="2026-01-16T23:56:27.312962804Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 16 23:56:27.313775 containerd[1615]: time="2026-01-16T23:56:27.313736969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 23:56:28.462789 containerd[1615]: time="2026-01-16T23:56:28.462697219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:28.464481 containerd[1615]: time="2026-01-16T23:56:28.464018286Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106" Jan 16 23:56:28.465361 containerd[1615]: time="2026-01-16T23:56:28.465319108Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:28.470317 containerd[1615]: time="2026-01-16T23:56:28.470284582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:28.473267 containerd[1615]: time="2026-01-16T23:56:28.473197619Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.159408357s" Jan 16 23:56:28.473345 containerd[1615]: time="2026-01-16T23:56:28.473263555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 16 
23:56:28.474127 containerd[1615]: time="2026-01-16T23:56:28.474075143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 23:56:29.517658 containerd[1615]: time="2026-01-16T23:56:29.515865073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:29.518191 containerd[1615]: time="2026-01-16T23:56:29.517700767Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767" Jan 16 23:56:29.518473 containerd[1615]: time="2026-01-16T23:56:29.518433035Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:29.524117 containerd[1615]: time="2026-01-16T23:56:29.524082504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:29.526120 containerd[1615]: time="2026-01-16T23:56:29.526060226Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.051914748s" Jan 16 23:56:29.526120 containerd[1615]: time="2026-01-16T23:56:29.526112597Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 16 23:56:29.526876 containerd[1615]: time="2026-01-16T23:56:29.526839385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 23:56:30.607035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751220680.mount: Deactivated successfully. 
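
The PullImage/ImageCreate lines here are containerd's CRI plugin fetching the control-plane images one by one. To cross-check what actually landed in the CRI image store, crictl can be pointed at containerd's socket; a sketch assuming crictl is installed and the common default socket path (not shown in this log):

    # Image IDs and sizes should line up with the "image id" / "size" values logged above.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images --digests
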
Jan 16 23:56:30.920257 containerd[1615]: time="2026-01-16T23:56:30.920125760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:30.921912 containerd[1615]: time="2026-01-16T23:56:30.921727325Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750" Jan 16 23:56:30.923835 containerd[1615]: time="2026-01-16T23:56:30.923599938Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:30.926058 containerd[1615]: time="2026-01-16T23:56:30.926005086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:30.926870 containerd[1615]: time="2026-01-16T23:56:30.926830993Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.399946959s" Jan 16 23:56:30.926988 containerd[1615]: time="2026-01-16T23:56:30.926970577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 16 23:56:30.928009 containerd[1615]: time="2026-01-16T23:56:30.927800485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 23:56:31.562948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount554291138.mount: Deactivated successfully. Jan 16 23:56:32.312081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 23:56:32.321988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
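
"Scheduled restart job, restart counter is at 3" means kubelet.service carries a Restart= policy and systemd is on its third retry. A quick sketch for inspecting the policy and the reason for the previous failures:

    # Show the unit's restart policy and accumulated restart count, then the recent log tail.
    systemctl show kubelet -p Restart -p RestartSec -p NRestarts
    journalctl -u kubelet -n 30 --no-pager
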
Jan 16 23:56:32.377686 containerd[1615]: time="2026-01-16T23:56:32.376267552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.387226 containerd[1615]: time="2026-01-16T23:56:32.387177690Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 16 23:56:32.389761 containerd[1615]: time="2026-01-16T23:56:32.389724813Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.396221 containerd[1615]: time="2026-01-16T23:56:32.396174455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.397465 containerd[1615]: time="2026-01-16T23:56:32.397418752Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.469586781s" Jan 16 23:56:32.397547 containerd[1615]: time="2026-01-16T23:56:32.397469161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 16 23:56:32.398821 containerd[1615]: time="2026-01-16T23:56:32.398790470Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 23:56:32.462020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:32.467929 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:56:32.526979 kubelet[2152]: E0116 23:56:32.526346 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:56:32.533978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:56:32.534313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:56:32.947481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787129205.mount: Deactivated successfully. 
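
The crash loop above has a single cause: /var/lib/kubelet/config.yaml does not exist yet. On kubeadm-provisioned nodes that file is written during 'kubeadm init' or 'kubeadm join', so the restarts stop on their own once bootstrap runs. For illustration only, a minimal KubeletConfiguration of the shape the kubelet is looking for, with values matching what this node later logs (the /etc/kubernetes/manifests static pod path and the cgroupfs driver):

    # Illustrative sketch; kubeadm normally generates this file, so do not hand-write it on a managed node.
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    cgroupDriver: cgroupfs
    EOF
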
Jan 16 23:56:32.956151 containerd[1615]: time="2026-01-16T23:56:32.955212636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.956717 containerd[1615]: time="2026-01-16T23:56:32.956677571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 16 23:56:32.957752 containerd[1615]: time="2026-01-16T23:56:32.957705949Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.961096 containerd[1615]: time="2026-01-16T23:56:32.961047771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:32.962039 containerd[1615]: time="2026-01-16T23:56:32.961934885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 563.006631ms" Jan 16 23:56:32.962039 containerd[1615]: time="2026-01-16T23:56:32.961965811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 16 23:56:32.962712 containerd[1615]: time="2026-01-16T23:56:32.962677054Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 23:56:33.607290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294452790.mount: Deactivated successfully. Jan 16 23:56:35.061703 containerd[1615]: time="2026-01-16T23:56:35.061646421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:35.063929 containerd[1615]: time="2026-01-16T23:56:35.063688801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Jan 16 23:56:35.063929 containerd[1615]: time="2026-01-16T23:56:35.063868868Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:35.069942 containerd[1615]: time="2026-01-16T23:56:35.069883030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:35.073649 containerd[1615]: time="2026-01-16T23:56:35.072464449Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.109741548s" Jan 16 23:56:35.073649 containerd[1615]: time="2026-01-16T23:56:35.072533939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 16 23:56:40.073998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
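
pause:3.10 is pulled here, likely because the bootstrap tooling's image list references it, yet the pod sandboxes created later in this log use pause:3.8, i.e. whatever containerd's CRI plugin has configured as its sandbox image. A sketch for checking which pause image the runtime will actually use; the config path is the common default and is an assumption, not something this log states:

    # The effective sandbox image is the sandbox_image key of the CRI plugin config.
    grep -n 'sandbox_image' /etc/containerd/config.toml
    containerd config default | grep sandbox_image   # built-in default, for comparison
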
Jan 16 23:56:40.089181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:40.130851 systemd[1]: Reloading requested from client PID 2247 ('systemctl') (unit session-7.scope)... Jan 16 23:56:40.130872 systemd[1]: Reloading... Jan 16 23:56:40.254653 zram_generator::config[2287]: No configuration found. Jan 16 23:56:40.371534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:56:40.447795 systemd[1]: Reloading finished in 316 ms. Jan 16 23:56:40.496565 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 23:56:40.496680 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 23:56:40.497089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:40.507120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:40.647058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:40.649898 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:56:40.692667 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:56:40.692667 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:56:40.692667 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
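
The three deprecation warnings repeat on every kubelet start: --container-runtime-endpoint and --volume-plugin-dir both have direct KubeletConfiguration equivalents, while the pause image behind --pod-infra-container-image is configured on the runtime side rather than in the kubelet config. A hedged sketch of the two config-file fields, reusing the Flexvolume directory this log itself mentions; the containerd endpoint is the usual default and is an assumption:

    # containerRuntimeEndpoint and volumePluginDir are kubelet.config.k8s.io/v1beta1 fields.
    cat <<'EOF' >>/var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    # --pod-infra-container-image has no config-file twin; the sandbox image lives in the
    # container runtime's configuration instead (see the containerd note earlier).
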
Jan 16 23:56:40.692667 kubelet[2347]: I0116 23:56:40.691605 2347 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:56:41.560474 kubelet[2347]: I0116 23:56:41.560435 2347 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 23:56:41.561661 kubelet[2347]: I0116 23:56:41.560669 2347 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:56:41.561661 kubelet[2347]: I0116 23:56:41.560979 2347 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 23:56:41.591927 kubelet[2347]: E0116 23:56:41.591888 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.124.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:41.598309 kubelet[2347]: I0116 23:56:41.597571 2347 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:56:41.607601 kubelet[2347]: E0116 23:56:41.607558 2347 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:56:41.607855 kubelet[2347]: I0116 23:56:41.607837 2347 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:56:41.611577 kubelet[2347]: I0116 23:56:41.611540 2347 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 23:56:41.613064 kubelet[2347]: I0116 23:56:41.612994 2347 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:56:41.613696 kubelet[2347]: I0116 23:56:41.613207 2347 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-db2d61d92f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 16 23:56:41.613696 kubelet[2347]: I0116 23:56:41.613510 2347 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:56:41.613696 kubelet[2347]: I0116 23:56:41.613522 2347 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 23:56:41.614058 kubelet[2347]: I0116 23:56:41.614038 2347 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:41.617926 kubelet[2347]: I0116 23:56:41.617723 2347 kubelet.go:446] "Attempting to sync node with API server" Jan 16 23:56:41.617926 kubelet[2347]: I0116 23:56:41.617755 2347 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:56:41.617926 kubelet[2347]: I0116 23:56:41.617776 2347 kubelet.go:352] "Adding apiserver pod source" Jan 16 23:56:41.617926 kubelet[2347]: I0116 23:56:41.617788 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:56:41.623522 kubelet[2347]: W0116 23:56:41.623475 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.124.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-db2d61d92f&limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:41.624701 kubelet[2347]: E0116 23:56:41.623721 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.124.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-db2d61d92f&limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 
23:56:41.624701 kubelet[2347]: I0116 23:56:41.623853 2347 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:56:41.624701 kubelet[2347]: I0116 23:56:41.624547 2347 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 23:56:41.624701 kubelet[2347]: W0116 23:56:41.624675 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 23:56:41.627645 kubelet[2347]: I0116 23:56:41.627106 2347 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:56:41.627645 kubelet[2347]: I0116 23:56:41.627150 2347 server.go:1287] "Started kubelet" Jan 16 23:56:41.628953 kubelet[2347]: I0116 23:56:41.628924 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:56:41.631264 kubelet[2347]: W0116 23:56:41.631220 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.124.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:41.631430 kubelet[2347]: E0116 23:56:41.631393 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.124.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:41.634508 kubelet[2347]: I0116 23:56:41.634474 2347 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:56:41.636651 kubelet[2347]: I0116 23:56:41.636066 2347 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:56:41.636651 kubelet[2347]: I0116 23:56:41.636168 2347 server.go:479] "Adding debug handlers to kubelet server" Jan 16 23:56:41.636651 kubelet[2347]: E0116 23:56:41.636414 2347 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" Jan 16 23:56:41.638247 kubelet[2347]: I0116 23:56:41.638176 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:56:41.638637 kubelet[2347]: I0116 23:56:41.638602 2347 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:56:41.642574 kubelet[2347]: I0116 23:56:41.642538 2347 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:56:41.642686 kubelet[2347]: I0116 23:56:41.642673 2347 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:56:41.646647 kubelet[2347]: I0116 23:56:41.645590 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:56:41.648709 kubelet[2347]: I0116 23:56:41.648685 2347 factory.go:221] Registration of the systemd container factory successfully Jan 16 23:56:41.648998 kubelet[2347]: I0116 23:56:41.648976 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:56:41.651995 kubelet[2347]: E0116 23:56:41.651565 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://188.245.124.206:6443/api/v1/namespaces/default/events\": dial tcp 188.245.124.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-db2d61d92f.188b5b6cd2f835fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-db2d61d92f,UID:ci-4081-3-6-n-db2d61d92f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-db2d61d92f,},FirstTimestamp:2026-01-16 23:56:41.627129342 +0000 UTC m=+0.973525211,LastTimestamp:2026-01-16 23:56:41.627129342 +0000 UTC m=+0.973525211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-db2d61d92f,}" Jan 16 23:56:41.652285 kubelet[2347]: E0116 23:56:41.652235 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.124.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-db2d61d92f?timeout=10s\": dial tcp 188.245.124.206:6443: connect: connection refused" interval="200ms" Jan 16 23:56:41.652971 kubelet[2347]: W0116 23:56:41.652925 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.124.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:41.655140 kubelet[2347]: E0116 23:56:41.655111 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.124.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:41.656660 kubelet[2347]: I0116 23:56:41.654426 2347 factory.go:221] Registration of the containerd container factory successfully Jan 16 23:56:41.656660 kubelet[2347]: I0116 23:56:41.656120 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 23:56:41.657332 kubelet[2347]: I0116 23:56:41.657197 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 23:56:41.657332 kubelet[2347]: I0116 23:56:41.657228 2347 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 23:56:41.657482 kubelet[2347]: I0116 23:56:41.657459 2347 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 23:56:41.657482 kubelet[2347]: I0116 23:56:41.657477 2347 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 23:56:41.657536 kubelet[2347]: E0116 23:56:41.657520 2347 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:56:41.671908 kubelet[2347]: E0116 23:56:41.671866 2347 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:56:41.672490 kubelet[2347]: W0116 23:56:41.672341 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.124.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:41.672490 kubelet[2347]: E0116 23:56:41.672414 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.124.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:41.689927 kubelet[2347]: I0116 23:56:41.689834 2347 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:56:41.690303 kubelet[2347]: I0116 23:56:41.689982 2347 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:56:41.690303 kubelet[2347]: I0116 23:56:41.690005 2347 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:41.693157 kubelet[2347]: I0116 23:56:41.693122 2347 policy_none.go:49] "None policy: Start" Jan 16 23:56:41.693945 kubelet[2347]: I0116 23:56:41.693616 2347 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:56:41.693945 kubelet[2347]: I0116 23:56:41.693671 2347 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:56:41.698696 kubelet[2347]: I0116 23:56:41.698673 2347 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 23:56:41.700655 kubelet[2347]: I0116 23:56:41.699039 2347 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:56:41.700655 kubelet[2347]: I0116 23:56:41.699058 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:56:41.700827 kubelet[2347]: I0116 23:56:41.700801 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:56:41.704336 kubelet[2347]: E0116 23:56:41.704312 2347 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 23:56:41.704487 kubelet[2347]: E0116 23:56:41.704475 2347 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-db2d61d92f\" not found" Jan 16 23:56:41.768799 kubelet[2347]: E0116 23:56:41.768606 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.771508 kubelet[2347]: E0116 23:56:41.771477 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.775015 kubelet[2347]: E0116 23:56:41.774959 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.802380 kubelet[2347]: I0116 23:56:41.801886 2347 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.802714 kubelet[2347]: E0116 23:56:41.802681 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.124.206:6443/api/v1/nodes\": dial tcp 188.245.124.206:6443: connect: connection refused" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844086 kubelet[2347]: I0116 23:56:41.843546 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844086 kubelet[2347]: I0116 23:56:41.843615 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844086 kubelet[2347]: I0116 23:56:41.843700 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844086 kubelet[2347]: I0116 23:56:41.843743 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844086 kubelet[2347]: I0116 23:56:41.843791 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e7046afb65d62fb45eff0d63e15e0ec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-db2d61d92f\" (UID: \"0e7046afb65d62fb45eff0d63e15e0ec\") " 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844507 kubelet[2347]: I0116 23:56:41.843825 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8a4e0bc592dc7543eac58b76f1714e8-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" (UID: \"a8a4e0bc592dc7543eac58b76f1714e8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844507 kubelet[2347]: I0116 23:56:41.843922 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8a4e0bc592dc7543eac58b76f1714e8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" (UID: \"a8a4e0bc592dc7543eac58b76f1714e8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844507 kubelet[2347]: I0116 23:56:41.843975 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8a4e0bc592dc7543eac58b76f1714e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" (UID: \"a8a4e0bc592dc7543eac58b76f1714e8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.844507 kubelet[2347]: I0116 23:56:41.844014 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:41.853119 kubelet[2347]: E0116 23:56:41.853032 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.124.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-db2d61d92f?timeout=10s\": dial tcp 188.245.124.206:6443: connect: connection refused" interval="400ms" Jan 16 23:56:42.006274 kubelet[2347]: I0116 23:56:42.006032 2347 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:42.007053 kubelet[2347]: E0116 23:56:42.007000 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.124.206:6443/api/v1/nodes\": dial tcp 188.245.124.206:6443: connect: connection refused" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:42.072276 containerd[1615]: time="2026-01-16T23:56:42.072208157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-db2d61d92f,Uid:a8a4e0bc592dc7543eac58b76f1714e8,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:42.076053 containerd[1615]: time="2026-01-16T23:56:42.075991297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-db2d61d92f,Uid:0e7046afb65d62fb45eff0d63e15e0ec,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:42.076328 containerd[1615]: time="2026-01-16T23:56:42.076285406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-db2d61d92f,Uid:da1b61f5262054b0a899dcd7282539df,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:42.254566 kubelet[2347]: E0116 23:56:42.254436 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://188.245.124.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-db2d61d92f?timeout=10s\": dial tcp 188.245.124.206:6443: connect: connection refused" interval="800ms" Jan 16 23:56:42.410564 kubelet[2347]: I0116 23:56:42.410527 2347 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:42.411039 kubelet[2347]: E0116 23:56:42.411000 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.124.206:6443/api/v1/nodes\": dial tcp 188.245.124.206:6443: connect: connection refused" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:42.549946 kubelet[2347]: W0116 23:56:42.549755 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.124.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-db2d61d92f&limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:42.549946 kubelet[2347]: E0116 23:56:42.549816 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.124.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-db2d61d92f&limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:42.635195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801204025.mount: Deactivated successfully. Jan 16 23:56:42.643565 containerd[1615]: time="2026-01-16T23:56:42.643246674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:42.645997 containerd[1615]: time="2026-01-16T23:56:42.645855816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:56:42.648264 containerd[1615]: time="2026-01-16T23:56:42.646799031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:42.648876 containerd[1615]: time="2026-01-16T23:56:42.648839956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:42.651118 containerd[1615]: time="2026-01-16T23:56:42.651068740Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:42.653003 containerd[1615]: time="2026-01-16T23:56:42.652878602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:56:42.654250 containerd[1615]: time="2026-01-16T23:56:42.653993634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 16 23:56:42.656704 containerd[1615]: time="2026-01-16T23:56:42.656539489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:56:42.657994 containerd[1615]: 
time="2026-01-16T23:56:42.657960712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 581.887367ms" Jan 16 23:56:42.659143 containerd[1615]: time="2026-01-16T23:56:42.658835760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.497509ms" Jan 16 23:56:42.661050 containerd[1615]: time="2026-01-16T23:56:42.661014459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.691291ms" Jan 16 23:56:42.785802 containerd[1615]: time="2026-01-16T23:56:42.784775570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:42.785976 containerd[1615]: time="2026-01-16T23:56:42.785299103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:42.785976 containerd[1615]: time="2026-01-16T23:56:42.785762629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:42.786303 containerd[1615]: time="2026-01-16T23:56:42.785989892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:42.793001 containerd[1615]: time="2026-01-16T23:56:42.792814818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:42.793001 containerd[1615]: time="2026-01-16T23:56:42.792877424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:42.793755 containerd[1615]: time="2026-01-16T23:56:42.792894546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:42.793755 containerd[1615]: time="2026-01-16T23:56:42.793042080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:42.793903 containerd[1615]: time="2026-01-16T23:56:42.793803357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:42.793903 containerd[1615]: time="2026-01-16T23:56:42.793855122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:42.793903 containerd[1615]: time="2026-01-16T23:56:42.793870764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:42.794178 containerd[1615]: time="2026-01-16T23:56:42.793974654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:42.814860 kubelet[2347]: W0116 23:56:42.814781 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.124.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:42.815845 kubelet[2347]: E0116 23:56:42.815719 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.124.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:42.880342 containerd[1615]: time="2026-01-16T23:56:42.880305886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-db2d61d92f,Uid:a8a4e0bc592dc7543eac58b76f1714e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"64572e4adea03728945b3da3aa0ca79a0a7c9fcdbcd72eb2a9a4f2e5dd2985ea\"" Jan 16 23:56:42.886123 containerd[1615]: time="2026-01-16T23:56:42.886003098Z" level=info msg="CreateContainer within sandbox \"64572e4adea03728945b3da3aa0ca79a0a7c9fcdbcd72eb2a9a4f2e5dd2985ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 23:56:42.890901 containerd[1615]: time="2026-01-16T23:56:42.890774337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-db2d61d92f,Uid:da1b61f5262054b0a899dcd7282539df,Namespace:kube-system,Attempt:0,} returns sandbox id \"2572583c81cc20d03990e115aeb6216c10dc5bcd275ecdf63566c015a585d322\"" Jan 16 23:56:42.893745 containerd[1615]: time="2026-01-16T23:56:42.893492530Z" level=info msg="CreateContainer within sandbox \"2572583c81cc20d03990e115aeb6216c10dc5bcd275ecdf63566c015a585d322\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 23:56:42.899982 containerd[1615]: time="2026-01-16T23:56:42.899896493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-db2d61d92f,Uid:0e7046afb65d62fb45eff0d63e15e0ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4284977eb62214dd0b28c08a38ac1ffd94738adc030399a29e44e0f00716e7\"" Jan 16 23:56:42.903458 containerd[1615]: time="2026-01-16T23:56:42.903388404Z" level=info msg="CreateContainer within sandbox \"4d4284977eb62214dd0b28c08a38ac1ffd94738adc030399a29e44e0f00716e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 23:56:42.906589 containerd[1615]: time="2026-01-16T23:56:42.906443231Z" level=info msg="CreateContainer within sandbox \"64572e4adea03728945b3da3aa0ca79a0a7c9fcdbcd72eb2a9a4f2e5dd2985ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9adb7850fb7ad99d8c32ebd8828bdda7e21b9070e425ecbc07a617588e681e8\"" Jan 16 23:56:42.907162 containerd[1615]: time="2026-01-16T23:56:42.907044611Z" level=info msg="StartContainer for \"e9adb7850fb7ad99d8c32ebd8828bdda7e21b9070e425ecbc07a617588e681e8\"" Jan 16 23:56:42.916452 containerd[1615]: time="2026-01-16T23:56:42.916413112Z" level=info msg="CreateContainer within sandbox \"2572583c81cc20d03990e115aeb6216c10dc5bcd275ecdf63566c015a585d322\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca\"" Jan 16 23:56:42.917889 containerd[1615]: time="2026-01-16T23:56:42.917082220Z" level=info msg="StartContainer for \"9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca\"" Jan 16 23:56:42.928139 containerd[1615]: time="2026-01-16T23:56:42.928078324Z" level=info msg="CreateContainer within sandbox \"4d4284977eb62214dd0b28c08a38ac1ffd94738adc030399a29e44e0f00716e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5e33b4b06da4889f2165b1ef260ff4d8d3535df079bb37ade64e04bcbc685d6\"" Jan 16 23:56:42.928650 containerd[1615]: time="2026-01-16T23:56:42.928613778Z" level=info msg="StartContainer for \"a5e33b4b06da4889f2165b1ef260ff4d8d3535df079bb37ade64e04bcbc685d6\"" Jan 16 23:56:43.003215 containerd[1615]: time="2026-01-16T23:56:43.003141891Z" level=info msg="StartContainer for \"e9adb7850fb7ad99d8c32ebd8828bdda7e21b9070e425ecbc07a617588e681e8\" returns successfully" Jan 16 23:56:43.006063 kubelet[2347]: W0116 23:56:43.005990 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.124.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.124.206:6443: connect: connection refused Jan 16 23:56:43.006164 kubelet[2347]: E0116 23:56:43.006068 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.124.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.124.206:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:56:43.049184 containerd[1615]: time="2026-01-16T23:56:43.048934178Z" level=info msg="StartContainer for \"9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca\" returns successfully" Jan 16 23:56:43.056651 kubelet[2347]: E0116 23:56:43.055899 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.124.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-db2d61d92f?timeout=10s\": dial tcp 188.245.124.206:6443: connect: connection refused" interval="1.6s" Jan 16 23:56:43.062913 containerd[1615]: time="2026-01-16T23:56:43.062867907Z" level=info msg="StartContainer for \"a5e33b4b06da4889f2165b1ef260ff4d8d3535df079bb37ade64e04bcbc685d6\" returns successfully" Jan 16 23:56:43.213076 kubelet[2347]: I0116 23:56:43.212972 2347 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:43.703660 kubelet[2347]: E0116 23:56:43.703536 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:43.714640 kubelet[2347]: E0116 23:56:43.713170 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:43.717858 kubelet[2347]: E0116 23:56:43.717832 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:44.720871 kubelet[2347]: E0116 23:56:44.720845 2347 kubelet.go:3190] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:44.721958 kubelet[2347]: E0116 23:56:44.721580 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:44.889655 update_engine[1591]: I20260116 23:56:44.888665 1591 update_attempter.cc:509] Updating boot flags... Jan 16 23:56:45.024754 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2623) Jan 16 23:56:45.245613 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2626) Jan 16 23:56:45.258719 kubelet[2347]: E0116 23:56:45.258619 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-db2d61d92f\" not found" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.386996 kubelet[2347]: I0116 23:56:45.386952 2347 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.386996 kubelet[2347]: E0116 23:56:45.386995 2347 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-db2d61d92f\": node \"ci-4081-3-6-n-db2d61d92f\" not found" Jan 16 23:56:45.438362 kubelet[2347]: I0116 23:56:45.438325 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.455385 kubelet[2347]: E0116 23:56:45.455339 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.455385 kubelet[2347]: I0116 23:56:45.455376 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.460851 kubelet[2347]: E0116 23:56:45.460809 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.460851 kubelet[2347]: I0116 23:56:45.460844 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.466701 kubelet[2347]: E0116 23:56:45.466648 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-db2d61d92f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:45.634900 kubelet[2347]: I0116 23:56:45.634707 2347 apiserver.go:52] "Watching apiserver" Jan 16 23:56:45.643486 kubelet[2347]: I0116 23:56:45.643454 2347 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:56:46.654168 kubelet[2347]: I0116 23:56:46.653816 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:47.619378 systemd[1]: Reloading requested from client PID 2634 ('systemctl') (unit session-7.scope)... Jan 16 23:56:47.619396 systemd[1]: Reloading... Jan 16 23:56:47.706744 zram_generator::config[2674]: No configuration found. 
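
Registration succeeds at 23:56:45 once the static-pod API server becomes reachable; the earlier "no PriorityClass with name system-node-critical" rejections are a startup race, since the built-in system-node-critical and system-cluster-critical classes are created by the API server itself shortly after it comes up. A quick verification sketch, assuming the usual kubeadm admin kubeconfig path (not shown in this log):

    export KUBECONFIG=/etc/kubernetes/admin.conf   # assumed path on kubeadm control planes
    kubectl get node ci-4081-3-6-n-db2d61d92f
    kubectl get priorityclass system-node-critical
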
Jan 16 23:56:47.831409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:56:47.914824 systemd[1]: Reloading finished in 295 ms. Jan 16 23:56:47.952927 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:47.968496 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 23:56:47.969000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:47.984259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:56:48.103442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:56:48.117788 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:56:48.178516 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:56:48.178516 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:56:48.178516 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:56:48.179438 kubelet[2729]: I0116 23:56:48.178501 2729 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:56:48.184914 kubelet[2729]: I0116 23:56:48.184862 2729 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 23:56:48.184914 kubelet[2729]: I0116 23:56:48.184899 2729 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:56:48.185210 kubelet[2729]: I0116 23:56:48.185180 2729 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 23:56:48.186581 kubelet[2729]: I0116 23:56:48.186533 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 23:56:48.192500 kubelet[2729]: I0116 23:56:48.192444 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:56:48.197179 kubelet[2729]: E0116 23:56:48.197130 2729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:56:48.197179 kubelet[2729]: I0116 23:56:48.197170 2729 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:56:48.199655 kubelet[2729]: I0116 23:56:48.199612 2729 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 23:56:48.200156 kubelet[2729]: I0116 23:56:48.200105 2729 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:56:48.200307 kubelet[2729]: I0116 23:56:48.200137 2729 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-db2d61d92f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 16 23:56:48.200477 kubelet[2729]: I0116 23:56:48.200317 2729 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:56:48.200477 kubelet[2729]: I0116 23:56:48.200327 2729 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 23:56:48.200477 kubelet[2729]: I0116 23:56:48.200372 2729 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:48.200878 kubelet[2729]: I0116 23:56:48.200568 2729 kubelet.go:446] "Attempting to sync node with API server" Jan 16 23:56:48.200878 kubelet[2729]: I0116 23:56:48.200584 2729 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:56:48.200878 kubelet[2729]: I0116 23:56:48.200603 2729 kubelet.go:352] "Adding apiserver pod source" Jan 16 23:56:48.200878 kubelet[2729]: I0116 23:56:48.200612 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:56:48.202676 kubelet[2729]: I0116 23:56:48.202265 2729 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:56:48.203420 kubelet[2729]: I0116 23:56:48.203400 2729 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 23:56:48.203956 kubelet[2729]: I0116 23:56:48.203940 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:56:48.204054 kubelet[2729]: I0116 23:56:48.204045 2729 server.go:1287] "Started kubelet" Jan 16 23:56:48.206726 kubelet[2729]: I0116 23:56:48.206705 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:56:48.214913 kubelet[2729]: I0116 23:56:48.214866 2729 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:56:48.218654 kubelet[2729]: I0116 23:56:48.215428 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:56:48.218654 kubelet[2729]: I0116 23:56:48.215744 2729 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:56:48.218654 kubelet[2729]: I0116 23:56:48.215942 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:56:48.220188 kubelet[2729]: I0116 23:56:48.220153 2729 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:56:48.220795 kubelet[2729]: E0116 23:56:48.220618 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-db2d61d92f\" not found" Jan 16 23:56:48.220795 kubelet[2729]: I0116 23:56:48.220690 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:56:48.221655 kubelet[2729]: I0116 23:56:48.221059 2729 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:56:48.221655 kubelet[2729]: I0116 23:56:48.221642 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:56:48.223257 kubelet[2729]: I0116 23:56:48.223236 2729 factory.go:221] Registration of the containerd container factory successfully Jan 16 23:56:48.223257 kubelet[2729]: I0116 23:56:48.223256 2729 factory.go:221] Registration of the systemd container factory successfully Jan 16 23:56:48.234599 kubelet[2729]: I0116 23:56:48.232510 2729 server.go:479] "Adding debug handlers to kubelet server" Jan 16 23:56:48.243698 kubelet[2729]: I0116 23:56:48.243612 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 23:56:48.245028 kubelet[2729]: I0116 23:56:48.245001 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 23:56:48.245151 kubelet[2729]: I0116 23:56:48.245140 2729 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 23:56:48.245217 kubelet[2729]: I0116 23:56:48.245208 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 23:56:48.245265 kubelet[2729]: I0116 23:56:48.245257 2729 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 23:56:48.245362 kubelet[2729]: E0116 23:56:48.245346 2729 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:56:48.315875 kubelet[2729]: I0116 23:56:48.315848 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:56:48.316081 kubelet[2729]: I0116 23:56:48.316066 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:56:48.316143 kubelet[2729]: I0116 23:56:48.316135 2729 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:56:48.316524 kubelet[2729]: I0116 23:56:48.316504 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 23:56:48.316621 kubelet[2729]: I0116 23:56:48.316596 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 23:56:48.316694 kubelet[2729]: I0116 23:56:48.316686 2729 policy_none.go:49] "None policy: Start" Jan 16 23:56:48.316746 kubelet[2729]: I0116 23:56:48.316739 2729 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:56:48.316796 kubelet[2729]: I0116 23:56:48.316789 2729 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:56:48.316963 kubelet[2729]: I0116 23:56:48.316952 2729 state_mem.go:75] "Updated machine memory state" Jan 16 23:56:48.318143 kubelet[2729]: I0116 23:56:48.318119 2729 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 23:56:48.318435 kubelet[2729]: I0116 23:56:48.318383 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:56:48.318536 kubelet[2729]: I0116 23:56:48.318504 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:56:48.321041 kubelet[2729]: I0116 23:56:48.321011 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:56:48.321727 kubelet[2729]: E0116 23:56:48.321489 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 23:56:48.346863 kubelet[2729]: I0116 23:56:48.346816 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.348208 kubelet[2729]: I0116 23:56:48.347317 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.348208 kubelet[2729]: I0116 23:56:48.347550 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.356939 kubelet[2729]: E0116 23:56:48.356854 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-db2d61d92f\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.423312 kubelet[2729]: I0116 23:56:48.423242 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.434488 kubelet[2729]: I0116 23:56:48.434206 2729 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.434488 kubelet[2729]: I0116 23:56:48.434290 2729 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522317 kubelet[2729]: I0116 23:56:48.522250 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522317 kubelet[2729]: I0116 23:56:48.522326 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522587 kubelet[2729]: I0116 23:56:48.522365 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8a4e0bc592dc7543eac58b76f1714e8-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" (UID: \"a8a4e0bc592dc7543eac58b76f1714e8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522587 kubelet[2729]: I0116 23:56:48.522401 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8a4e0bc592dc7543eac58b76f1714e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" (UID: \"a8a4e0bc592dc7543eac58b76f1714e8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522587 kubelet[2729]: I0116 23:56:48.522460 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522587 kubelet[2729]: I0116 23:56:48.522495 2729 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522587 kubelet[2729]: I0116 23:56:48.522529 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da1b61f5262054b0a899dcd7282539df-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-db2d61d92f\" (UID: \"da1b61f5262054b0a899dcd7282539df\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522911 kubelet[2729]: I0116 23:56:48.522561 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e7046afb65d62fb45eff0d63e15e0ec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-db2d61d92f\" (UID: \"0e7046afb65d62fb45eff0d63e15e0ec\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:48.522911 kubelet[2729]: I0116 23:56:48.522588 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8a4e0bc592dc7543eac58b76f1714e8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" (UID: \"a8a4e0bc592dc7543eac58b76f1714e8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:49.209793 kubelet[2729]: I0116 23:56:49.208807 2729 apiserver.go:52] "Watching apiserver" Jan 16 23:56:49.222054 kubelet[2729]: I0116 23:56:49.221991 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:56:49.291220 kubelet[2729]: I0116 23:56:49.291130 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:49.303768 kubelet[2729]: E0116 23:56:49.303727 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-db2d61d92f\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" Jan 16 23:56:49.316907 kubelet[2729]: I0116 23:56:49.316683 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-db2d61d92f" podStartSLOduration=1.314757871 podStartE2EDuration="1.314757871s" podCreationTimestamp="2026-01-16 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:49.313798243 +0000 UTC m=+1.189304096" watchObservedRunningTime="2026-01-16 23:56:49.314757871 +0000 UTC m=+1.190263724" Jan 16 23:56:49.328988 kubelet[2729]: I0116 23:56:49.328853 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-db2d61d92f" podStartSLOduration=1.32883395 podStartE2EDuration="1.32883395s" podCreationTimestamp="2026-01-16 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:49.328759625 +0000 UTC m=+1.204265518" watchObservedRunningTime="2026-01-16 23:56:49.32883395 +0000 UTC m=+1.204339803" Jan 16 23:56:49.363767 kubelet[2729]: I0116 23:56:49.363679 2729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-db2d61d92f" podStartSLOduration=3.363662141 podStartE2EDuration="3.363662141s" podCreationTimestamp="2026-01-16 23:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:49.346997959 +0000 UTC m=+1.222503812" watchObservedRunningTime="2026-01-16 23:56:49.363662141 +0000 UTC m=+1.239167954" Jan 16 23:56:53.762242 kubelet[2729]: I0116 23:56:53.761936 2729 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 23:56:53.765560 kubelet[2729]: I0116 23:56:53.763360 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 23:56:53.765922 containerd[1615]: time="2026-01-16T23:56:53.763015791Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 23:56:54.461423 kubelet[2729]: I0116 23:56:54.461338 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2-xtables-lock\") pod \"kube-proxy-lvsvj\" (UID: \"cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2\") " pod="kube-system/kube-proxy-lvsvj" Jan 16 23:56:54.461871 kubelet[2729]: I0116 23:56:54.461516 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2-lib-modules\") pod \"kube-proxy-lvsvj\" (UID: \"cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2\") " pod="kube-system/kube-proxy-lvsvj" Jan 16 23:56:54.461871 kubelet[2729]: I0116 23:56:54.461738 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pxn\" (UniqueName: \"kubernetes.io/projected/cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2-kube-api-access-98pxn\") pod \"kube-proxy-lvsvj\" (UID: \"cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2\") " pod="kube-system/kube-proxy-lvsvj" Jan 16 23:56:54.462351 kubelet[2729]: I0116 23:56:54.461945 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2-kube-proxy\") pod \"kube-proxy-lvsvj\" (UID: \"cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2\") " pod="kube-system/kube-proxy-lvsvj" Jan 16 23:56:54.755605 containerd[1615]: time="2026-01-16T23:56:54.754955610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lvsvj,Uid:cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2,Namespace:kube-system,Attempt:0,}" Jan 16 23:56:54.787291 containerd[1615]: time="2026-01-16T23:56:54.786951305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:54.787291 containerd[1615]: time="2026-01-16T23:56:54.787023029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:54.787291 containerd[1615]: time="2026-01-16T23:56:54.787035190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:54.787291 containerd[1615]: time="2026-01-16T23:56:54.787142556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:54.859595 containerd[1615]: time="2026-01-16T23:56:54.857417462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lvsvj,Uid:cf8e20c7-6e7c-4bf6-ba1c-ca4bf4106dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf554c91a7ebb5d7b6cb09b36afe232548ebdc7fccbda817559dd0ba0ee9877\"" Jan 16 23:56:54.868539 containerd[1615]: time="2026-01-16T23:56:54.868364243Z" level=info msg="CreateContainer within sandbox \"aaf554c91a7ebb5d7b6cb09b36afe232548ebdc7fccbda817559dd0ba0ee9877\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 23:56:54.902772 containerd[1615]: time="2026-01-16T23:56:54.902699351Z" level=info msg="CreateContainer within sandbox \"aaf554c91a7ebb5d7b6cb09b36afe232548ebdc7fccbda817559dd0ba0ee9877\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43ab33a410e0fb53a461e7712ffac64d169af280632495600b33857d6da26cf7\"" Jan 16 23:56:54.904317 containerd[1615]: time="2026-01-16T23:56:54.904064628Z" level=info msg="StartContainer for \"43ab33a410e0fb53a461e7712ffac64d169af280632495600b33857d6da26cf7\"" Jan 16 23:56:54.964115 containerd[1615]: time="2026-01-16T23:56:54.963998188Z" level=info msg="StartContainer for \"43ab33a410e0fb53a461e7712ffac64d169af280632495600b33857d6da26cf7\" returns successfully" Jan 16 23:56:54.964605 kubelet[2729]: I0116 23:56:54.964434 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj62j\" (UniqueName: \"kubernetes.io/projected/28f488d2-09a1-4704-b8dd-46d2eb09be01-kube-api-access-kj62j\") pod \"tigera-operator-7dcd859c48-2djtc\" (UID: \"28f488d2-09a1-4704-b8dd-46d2eb09be01\") " pod="tigera-operator/tigera-operator-7dcd859c48-2djtc" Jan 16 23:56:54.964605 kubelet[2729]: I0116 23:56:54.964472 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28f488d2-09a1-4704-b8dd-46d2eb09be01-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2djtc\" (UID: \"28f488d2-09a1-4704-b8dd-46d2eb09be01\") " pod="tigera-operator/tigera-operator-7dcd859c48-2djtc" Jan 16 23:56:55.188730 containerd[1615]: time="2026-01-16T23:56:55.188594808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2djtc,Uid:28f488d2-09a1-4704-b8dd-46d2eb09be01,Namespace:tigera-operator,Attempt:0,}" Jan 16 23:56:55.218944 containerd[1615]: time="2026-01-16T23:56:55.218060651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:56:55.218944 containerd[1615]: time="2026-01-16T23:56:55.218178817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:56:55.218944 containerd[1615]: time="2026-01-16T23:56:55.218194938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:55.218944 containerd[1615]: time="2026-01-16T23:56:55.218284983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:56:55.277704 containerd[1615]: time="2026-01-16T23:56:55.277656572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2djtc,Uid:28f488d2-09a1-4704-b8dd-46d2eb09be01,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a790f11ca5a6ee36bedcd71d949f32e6e4cd510ec886dc1fb78a10df9202d1d6\"" Jan 16 23:56:55.281328 containerd[1615]: time="2026-01-16T23:56:55.280969232Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 23:56:55.321913 kubelet[2729]: I0116 23:56:55.321855 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lvsvj" podStartSLOduration=1.321820974 podStartE2EDuration="1.321820974s" podCreationTimestamp="2026-01-16 23:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:56:55.320792958 +0000 UTC m=+7.196298811" watchObservedRunningTime="2026-01-16 23:56:55.321820974 +0000 UTC m=+7.197326827" Jan 16 23:56:57.348167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572599088.mount: Deactivated successfully. Jan 16 23:56:57.768207 containerd[1615]: time="2026-01-16T23:56:57.768050856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:57.769644 containerd[1615]: time="2026-01-16T23:56:57.769529850Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 16 23:56:57.771134 containerd[1615]: time="2026-01-16T23:56:57.771082208Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:57.774223 containerd[1615]: time="2026-01-16T23:56:57.774173403Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:56:57.776041 containerd[1615]: time="2026-01-16T23:56:57.775368343Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.494360229s" Jan 16 23:56:57.776041 containerd[1615]: time="2026-01-16T23:56:57.775405305Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 16 23:56:57.778748 containerd[1615]: time="2026-01-16T23:56:57.778706270Z" level=info msg="CreateContainer within sandbox \"a790f11ca5a6ee36bedcd71d949f32e6e4cd510ec886dc1fb78a10df9202d1d6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 23:56:57.796903 containerd[1615]: time="2026-01-16T23:56:57.796860421Z" level=info msg="CreateContainer within sandbox \"a790f11ca5a6ee36bedcd71d949f32e6e4cd510ec886dc1fb78a10df9202d1d6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf\"" Jan 16 23:56:57.798312 containerd[1615]: time="2026-01-16T23:56:57.798272971Z" level=info msg="StartContainer for 
\"f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf\"" Jan 16 23:56:57.871887 containerd[1615]: time="2026-01-16T23:56:57.871750456Z" level=info msg="StartContainer for \"f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf\" returns successfully" Jan 16 23:56:59.747002 kubelet[2729]: I0116 23:56:59.745704 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2djtc" podStartSLOduration=3.249265191 podStartE2EDuration="5.745676685s" podCreationTimestamp="2026-01-16 23:56:54 +0000 UTC" firstStartedPulling="2026-01-16 23:56:55.280201751 +0000 UTC m=+7.155707604" lastFinishedPulling="2026-01-16 23:56:57.776613245 +0000 UTC m=+9.652119098" observedRunningTime="2026-01-16 23:56:58.338950682 +0000 UTC m=+10.214456535" watchObservedRunningTime="2026-01-16 23:56:59.745676685 +0000 UTC m=+11.621182538" Jan 16 23:57:02.329301 sudo[1859]: pam_unix(sudo:session): session closed for user root Jan 16 23:57:02.426940 sshd[1855]: pam_unix(sshd:session): session closed for user core Jan 16 23:57:02.436687 systemd[1]: sshd@6-188.245.124.206:22-4.153.228.146:47164.service: Deactivated successfully. Jan 16 23:57:02.451438 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 23:57:02.451485 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Jan 16 23:57:02.455406 systemd-logind[1590]: Removed session 7. Jan 16 23:57:11.476654 kubelet[2729]: I0116 23:57:11.475918 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44bd5514-c5ed-4dd5-bf7a-28bdc952164b-tigera-ca-bundle\") pod \"calico-typha-59c649c748-2k7zv\" (UID: \"44bd5514-c5ed-4dd5-bf7a-28bdc952164b\") " pod="calico-system/calico-typha-59c649c748-2k7zv" Jan 16 23:57:11.477992 kubelet[2729]: I0116 23:57:11.475970 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/44bd5514-c5ed-4dd5-bf7a-28bdc952164b-typha-certs\") pod \"calico-typha-59c649c748-2k7zv\" (UID: \"44bd5514-c5ed-4dd5-bf7a-28bdc952164b\") " pod="calico-system/calico-typha-59c649c748-2k7zv" Jan 16 23:57:11.478190 kubelet[2729]: I0116 23:57:11.478170 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zbm\" (UniqueName: \"kubernetes.io/projected/44bd5514-c5ed-4dd5-bf7a-28bdc952164b-kube-api-access-s7zbm\") pod \"calico-typha-59c649c748-2k7zv\" (UID: \"44bd5514-c5ed-4dd5-bf7a-28bdc952164b\") " pod="calico-system/calico-typha-59c649c748-2k7zv" Jan 16 23:57:11.579330 kubelet[2729]: I0116 23:57:11.578471 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-cni-net-dir\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.579330 kubelet[2729]: I0116 23:57:11.578598 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rp9m\" (UniqueName: \"kubernetes.io/projected/544f9763-df51-4795-9ac2-caf23efb46fa-kube-api-access-9rp9m\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.579330 kubelet[2729]: I0116 23:57:11.578672 2729 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-cni-bin-dir\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.579330 kubelet[2729]: I0116 23:57:11.578733 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-cni-log-dir\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.579330 kubelet[2729]: I0116 23:57:11.578770 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-var-run-calico\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580332 kubelet[2729]: I0116 23:57:11.578802 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-xtables-lock\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580332 kubelet[2729]: I0116 23:57:11.578839 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-flexvol-driver-host\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580332 kubelet[2729]: I0116 23:57:11.578874 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-lib-modules\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580332 kubelet[2729]: I0116 23:57:11.578924 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/544f9763-df51-4795-9ac2-caf23efb46fa-node-certs\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580332 kubelet[2729]: I0116 23:57:11.578953 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-policysync\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580738 kubelet[2729]: I0116 23:57:11.578983 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/544f9763-df51-4795-9ac2-caf23efb46fa-tigera-ca-bundle\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.580738 kubelet[2729]: I0116 23:57:11.579012 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/544f9763-df51-4795-9ac2-caf23efb46fa-var-lib-calico\") pod \"calico-node-nj8rz\" (UID: \"544f9763-df51-4795-9ac2-caf23efb46fa\") " pod="calico-system/calico-node-nj8rz" Jan 16 23:57:11.650019 kubelet[2729]: E0116 23:57:11.649842 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:11.682145 kubelet[2729]: I0116 23:57:11.679728 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsp7\" (UniqueName: \"kubernetes.io/projected/418b98a5-873e-4b20-a6d4-0ef55480b923-kube-api-access-2vsp7\") pod \"csi-node-driver-n2dkx\" (UID: \"418b98a5-873e-4b20-a6d4-0ef55480b923\") " pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:11.682145 kubelet[2729]: I0116 23:57:11.679824 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/418b98a5-873e-4b20-a6d4-0ef55480b923-varrun\") pod \"csi-node-driver-n2dkx\" (UID: \"418b98a5-873e-4b20-a6d4-0ef55480b923\") " pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:11.682145 kubelet[2729]: I0116 23:57:11.679853 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/418b98a5-873e-4b20-a6d4-0ef55480b923-registration-dir\") pod \"csi-node-driver-n2dkx\" (UID: \"418b98a5-873e-4b20-a6d4-0ef55480b923\") " pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:11.682145 kubelet[2729]: I0116 23:57:11.679954 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/418b98a5-873e-4b20-a6d4-0ef55480b923-socket-dir\") pod \"csi-node-driver-n2dkx\" (UID: \"418b98a5-873e-4b20-a6d4-0ef55480b923\") " pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:11.682145 kubelet[2729]: I0116 23:57:11.679996 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/418b98a5-873e-4b20-a6d4-0ef55480b923-kubelet-dir\") pod \"csi-node-driver-n2dkx\" (UID: \"418b98a5-873e-4b20-a6d4-0ef55480b923\") " pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:11.685723 kubelet[2729]: E0116 23:57:11.684853 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.685723 kubelet[2729]: W0116 23:57:11.684898 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.685723 kubelet[2729]: E0116 23:57:11.684924 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.685723 kubelet[2729]: E0116 23:57:11.685247 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.685723 kubelet[2729]: W0116 23:57:11.685259 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.685723 kubelet[2729]: E0116 23:57:11.685271 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.687053 kubelet[2729]: E0116 23:57:11.686991 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.687331 kubelet[2729]: W0116 23:57:11.687106 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.687331 kubelet[2729]: E0116 23:57:11.687126 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.688334 kubelet[2729]: E0116 23:57:11.688301 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.688334 kubelet[2729]: W0116 23:57:11.688326 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.688334 kubelet[2729]: E0116 23:57:11.688342 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.692662 kubelet[2729]: E0116 23:57:11.690745 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.692662 kubelet[2729]: W0116 23:57:11.690774 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.692662 kubelet[2729]: E0116 23:57:11.690794 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.692662 kubelet[2729]: E0116 23:57:11.691420 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.692662 kubelet[2729]: W0116 23:57:11.691431 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.692662 kubelet[2729]: E0116 23:57:11.691541 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.699670 kubelet[2729]: E0116 23:57:11.697683 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.699670 kubelet[2729]: W0116 23:57:11.697713 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.699670 kubelet[2729]: E0116 23:57:11.697737 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.711402 containerd[1615]: time="2026-01-16T23:57:11.709835306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c649c748-2k7zv,Uid:44bd5514-c5ed-4dd5-bf7a-28bdc952164b,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:11.717261 kubelet[2729]: E0116 23:57:11.717208 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.717634 kubelet[2729]: W0116 23:57:11.717598 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.717743 kubelet[2729]: E0116 23:57:11.717729 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.761922 containerd[1615]: time="2026-01-16T23:57:11.761204180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:11.762492 containerd[1615]: time="2026-01-16T23:57:11.761681355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:11.762492 containerd[1615]: time="2026-01-16T23:57:11.761696956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:11.763769 containerd[1615]: time="2026-01-16T23:57:11.763545775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:11.781926 kubelet[2729]: E0116 23:57:11.781703 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.781926 kubelet[2729]: W0116 23:57:11.781734 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.781926 kubelet[2729]: E0116 23:57:11.781765 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.782471 kubelet[2729]: E0116 23:57:11.782364 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.782471 kubelet[2729]: W0116 23:57:11.782405 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.782471 kubelet[2729]: E0116 23:57:11.782445 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.783651 kubelet[2729]: E0116 23:57:11.783353 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.783651 kubelet[2729]: W0116 23:57:11.783402 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.783651 kubelet[2729]: E0116 23:57:11.783439 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.785255 kubelet[2729]: E0116 23:57:11.784311 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.785255 kubelet[2729]: W0116 23:57:11.784324 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.785255 kubelet[2729]: E0116 23:57:11.784492 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.785255 kubelet[2729]: E0116 23:57:11.784608 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.785255 kubelet[2729]: W0116 23:57:11.784616 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.785255 kubelet[2729]: E0116 23:57:11.784655 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.786162 kubelet[2729]: E0116 23:57:11.785726 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.786162 kubelet[2729]: W0116 23:57:11.785747 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.786162 kubelet[2729]: E0116 23:57:11.785772 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.787899 kubelet[2729]: E0116 23:57:11.787320 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.787899 kubelet[2729]: W0116 23:57:11.787339 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.787899 kubelet[2729]: E0116 23:57:11.787835 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.792883 kubelet[2729]: E0116 23:57:11.792851 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.792883 kubelet[2729]: W0116 23:57:11.792875 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.793914 kubelet[2729]: E0116 23:57:11.792954 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.794128 kubelet[2729]: E0116 23:57:11.794108 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.794203 kubelet[2729]: W0116 23:57:11.794129 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.794334 kubelet[2729]: E0116 23:57:11.794252 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.795733 kubelet[2729]: E0116 23:57:11.795710 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.795733 kubelet[2729]: W0116 23:57:11.795731 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.795932 kubelet[2729]: E0116 23:57:11.795812 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.796620 kubelet[2729]: E0116 23:57:11.796599 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.797382 kubelet[2729]: W0116 23:57:11.796618 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.797535 kubelet[2729]: E0116 23:57:11.797469 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.798275 kubelet[2729]: E0116 23:57:11.798254 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.798275 kubelet[2729]: W0116 23:57:11.798274 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.798588 kubelet[2729]: E0116 23:57:11.798348 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.800171 kubelet[2729]: E0116 23:57:11.799714 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.800171 kubelet[2729]: W0116 23:57:11.799731 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.800399 kubelet[2729]: E0116 23:57:11.800276 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.802155 kubelet[2729]: E0116 23:57:11.802047 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.803389 kubelet[2729]: W0116 23:57:11.802454 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.803602 kubelet[2729]: E0116 23:57:11.803506 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.804333 kubelet[2729]: E0116 23:57:11.804284 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.805311 kubelet[2729]: W0116 23:57:11.805179 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.805832 kubelet[2729]: E0116 23:57:11.805536 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.806427 kubelet[2729]: E0116 23:57:11.806196 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.806427 kubelet[2729]: W0116 23:57:11.806315 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.807577 kubelet[2729]: E0116 23:57:11.806891 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.808143 kubelet[2729]: E0116 23:57:11.808082 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.808143 kubelet[2729]: W0116 23:57:11.808100 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.808441 kubelet[2729]: E0116 23:57:11.808334 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.808820 kubelet[2729]: E0116 23:57:11.808776 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.808984 kubelet[2729]: W0116 23:57:11.808791 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.809092 kubelet[2729]: E0116 23:57:11.809038 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.809490 kubelet[2729]: E0116 23:57:11.809474 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.809490 kubelet[2729]: W0116 23:57:11.809546 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.809924 kubelet[2729]: E0116 23:57:11.809877 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.810196 kubelet[2729]: E0116 23:57:11.810183 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.811185 kubelet[2729]: W0116 23:57:11.810259 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.811478 kubelet[2729]: E0116 23:57:11.811346 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.812803 kubelet[2729]: E0116 23:57:11.811707 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.812803 kubelet[2729]: W0116 23:57:11.811720 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.813336 kubelet[2729]: E0116 23:57:11.812969 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.813991 kubelet[2729]: E0116 23:57:11.813935 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.814484 kubelet[2729]: W0116 23:57:11.814245 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.815194 kubelet[2729]: E0116 23:57:11.814832 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.816930 kubelet[2729]: E0116 23:57:11.816595 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.816930 kubelet[2729]: W0116 23:57:11.816613 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.817586 kubelet[2729]: E0116 23:57:11.817390 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.818040 kubelet[2729]: E0116 23:57:11.817713 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.818040 kubelet[2729]: W0116 23:57:11.817728 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.818272 kubelet[2729]: E0116 23:57:11.818224 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.819005 kubelet[2729]: E0116 23:57:11.818854 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.819005 kubelet[2729]: W0116 23:57:11.818868 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.819005 kubelet[2729]: E0116 23:57:11.818882 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:57:11.833044 kubelet[2729]: E0116 23:57:11.832932 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:57:11.835079 kubelet[2729]: W0116 23:57:11.833446 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:57:11.835079 kubelet[2729]: E0116 23:57:11.833480 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:57:11.842778 containerd[1615]: time="2026-01-16T23:57:11.841899067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nj8rz,Uid:544f9763-df51-4795-9ac2-caf23efb46fa,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:11.879120 containerd[1615]: time="2026-01-16T23:57:11.879068890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c649c748-2k7zv,Uid:44bd5514-c5ed-4dd5-bf7a-28bdc952164b,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbe92b219b6c5a3d2f9185cb7e7238b479d77c7d645f5e4a67d6eaed587a77bb\"" Jan 16 23:57:11.890828 containerd[1615]: time="2026-01-16T23:57:11.890781103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 23:57:11.901795 containerd[1615]: time="2026-01-16T23:57:11.901567166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:11.901795 containerd[1615]: time="2026-01-16T23:57:11.901633528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:11.901795 containerd[1615]: time="2026-01-16T23:57:11.901645208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:11.903173 containerd[1615]: time="2026-01-16T23:57:11.901734851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:11.974512 containerd[1615]: time="2026-01-16T23:57:11.974333121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nj8rz,Uid:544f9763-df51-4795-9ac2-caf23efb46fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\"" Jan 16 23:57:13.246313 kubelet[2729]: E0116 23:57:13.246193 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:13.546757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657376413.mount: Deactivated successfully. 
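The repeated FlexVolume failures above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before anything has installed it; calico-node-nj8rz mounts that directory as flexvol-driver-host, and the pod2daemon-flexvol image pulled below is what drops the uds binary into it, after which the probe errors stop. A quick check once the pod is up (a sketch; kubectl access from this node is an assumption):

# The uds driver binary should appear once calico-node's flexvol init container runs.
ls -l /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/
kubectl -n calico-system get pod calico-node-nj8rz \
  -o jsonpath='{.status.initContainerStatuses[*].name}{"\n"}'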
Jan 16 23:57:14.363156 containerd[1615]: time="2026-01-16T23:57:14.363089435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:14.364754 containerd[1615]: time="2026-01-16T23:57:14.364696883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 16 23:57:14.365744 containerd[1615]: time="2026-01-16T23:57:14.365663791Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:14.368216 containerd[1615]: time="2026-01-16T23:57:14.368140185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:14.369666 containerd[1615]: time="2026-01-16T23:57:14.368931128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.477831015s"
Jan 16 23:57:14.369666 containerd[1615]: time="2026-01-16T23:57:14.368970409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 16 23:57:14.371346 containerd[1615]: time="2026-01-16T23:57:14.371321879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 16 23:57:14.388832 containerd[1615]: time="2026-01-16T23:57:14.388764716Z" level=info msg="CreateContainer within sandbox \"dbe92b219b6c5a3d2f9185cb7e7238b479d77c7d645f5e4a67d6eaed587a77bb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 16 23:57:14.407663 containerd[1615]: time="2026-01-16T23:57:14.407548072Z" level=info msg="CreateContainer within sandbox \"dbe92b219b6c5a3d2f9185cb7e7238b479d77c7d645f5e4a67d6eaed587a77bb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7d6341dfb9fc839ad603b21ecd1954432f6cc66bbffee91cbaaa184e73a03ff\""
Jan 16 23:57:14.409911 containerd[1615]: time="2026-01-16T23:57:14.408514701Z" level=info msg="StartContainer for \"c7d6341dfb9fc839ad603b21ecd1954432f6cc66bbffee91cbaaa184e73a03ff\""
Jan 16 23:57:14.482872 containerd[1615]: time="2026-01-16T23:57:14.482818341Z" level=info msg="StartContainer for \"c7d6341dfb9fc839ad603b21ecd1954432f6cc66bbffee91cbaaa184e73a03ff\" returns successfully"
Jan 16 23:57:15.248347 kubelet[2729]: E0116 23:57:15.246811 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923"
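For orientation, the pull record above can be sanity-checked mechanically: "bytes read=33090687" is roughly 31.6 MiB, and the quoted wall time parses with Go's duration syntax. A small sketch, with both values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 33090687 // "bytes read" for calico/typha:v3.30.4, from the log
	elapsed, err := time.ParseDuration("2.477831015s") // the "in 2.477831015s" figure
	if err != nil {
		panic(err)
	}
	mib := float64(bytesRead) / (1 << 20)
	// Prints the pull size and the effective transfer rate it implies.
	fmt.Printf("pulled %.1f MiB in %v (%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
}
```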
Jan 16 23:57:15.388968 kubelet[2729]: E0116 23:57:15.388928 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 23:57:15.389349 kubelet[2729]: W0116 23:57:15.389179 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 23:57:15.389349 kubelet[2729]: E0116 23:57:15.389217 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same driver-call.go/plugins.go failure triplet repeats for each probe attempt through 23:57:15.432, identical except for timestamps …]
Jan 16 23:57:15.433341 kubelet[2729]: E0116 23:57:15.433311 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 23:57:15.433341 kubelet[2729]: W0116 23:57:15.433329 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 23:57:15.433341 kubelet[2729]: E0116 23:57:15.433341 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 23:57:16.026055 containerd[1615]: time="2026-01-16T23:57:16.025213992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:16.026055 containerd[1615]: time="2026-01-16T23:57:16.025853451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 16 23:57:16.026881 containerd[1615]: time="2026-01-16T23:57:16.026811118Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:16.030178 containerd[1615]: time="2026-01-16T23:57:16.029662599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:16.030463 containerd[1615]: time="2026-01-16T23:57:16.030434741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.658963257s"
Jan 16 23:57:16.030543 containerd[1615]: time="2026-01-16T23:57:16.030527463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 16 23:57:16.034901 containerd[1615]: time="2026-01-16T23:57:16.034669341Z" level=info msg="CreateContainer within sandbox \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 16 23:57:16.053038 containerd[1615]: time="2026-01-16T23:57:16.052949659Z" level=info msg="CreateContainer within sandbox \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"88f36600c48ae7f59d673dbc2084396e1c35475ce134f8b7e1626d7a0a7046f6\""
Jan 16 23:57:16.054728 containerd[1615]: time="2026-01-16T23:57:16.054661948Z" level=info msg="StartContainer for \"88f36600c48ae7f59d673dbc2084396e1c35475ce134f8b7e1626d7a0a7046f6\""
Jan 16 23:57:16.126885 containerd[1615]: time="2026-01-16T23:57:16.126789074Z" level=info msg="StartContainer for \"88f36600c48ae7f59d673dbc2084396e1c35475ce134f8b7e1626d7a0a7046f6\" returns successfully"
Jan 16 23:57:16.267241 containerd[1615]: time="2026-01-16T23:57:16.267125496Z" level=info msg="shim disconnected" id=88f36600c48ae7f59d673dbc2084396e1c35475ce134f8b7e1626d7a0a7046f6 namespace=k8s.io
Jan 16 23:57:16.267870 containerd[1615]: time="2026-01-16T23:57:16.267539388Z" level=warning msg="cleaning up after shim disconnected" id=88f36600c48ae7f59d673dbc2084396e1c35475ce134f8b7e1626d7a0a7046f6 namespace=k8s.io
Jan 16 23:57:16.267870 containerd[1615]: time="2026-01-16T23:57:16.267567669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 23:57:16.383900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88f36600c48ae7f59d673dbc2084396e1c35475ce134f8b7e1626d7a0a7046f6-rootfs.mount: Deactivated successfully.
Jan 16 23:57:16.388411 kubelet[2729]: I0116 23:57:16.387235 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 23:57:16.394867 containerd[1615]: time="2026-01-16T23:57:16.394793639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 16 23:57:16.412869 kubelet[2729]: I0116 23:57:16.412404 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59c649c748-2k7zv" podStartSLOduration=2.929785574 podStartE2EDuration="5.412379057s" podCreationTimestamp="2026-01-16 23:57:11 +0000 UTC" firstStartedPulling="2026-01-16 23:57:11.88817206 +0000 UTC m=+23.763677913" lastFinishedPulling="2026-01-16 23:57:14.370765543 +0000 UTC m=+26.246271396" observedRunningTime="2026-01-16 23:57:15.399528638 +0000 UTC m=+27.275034531" watchObservedRunningTime="2026-01-16 23:57:16.412379057 +0000 UTC m=+28.287884910"
Jan 16 23:57:17.245985 kubelet[2729]: E0116 23:57:17.245883 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923"
Jan 16 23:57:19.246556 kubelet[2729]: E0116 23:57:19.246124 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923"
Jan 16 23:57:20.124344 containerd[1615]: time="2026-01-16T23:57:20.124265756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:20.126049 containerd[1615]: time="2026-01-16T23:57:20.125861398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 16 23:57:20.127369 containerd[1615]: time="2026-01-16T23:57:20.127144072Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:20.129781 containerd[1615]: time="2026-01-16T23:57:20.129713219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:57:20.131070 containerd[1615]: time="2026-01-16T23:57:20.130657844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.735794524s"
Jan 16 23:57:20.131070 containerd[1615]: time="2026-01-16T23:57:20.130696965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 16 23:57:20.135209 containerd[1615]: time="2026-01-16T23:57:20.135021999Z" level=info msg="CreateContainer within sandbox \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
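The pod_startup_latency_tracker entry above carries its own arithmetic: the E2E duration is the observed-running timestamp minus podCreationTimestamp, and the SLO duration is that figure minus the time spent pulling images. The sketch below recomputes both from the timestamps printed in the log; this is a paraphrase of the tracker's accounting that happens to match the logged values exactly, not the kubelet's own code:

```go
package main

import (
	"fmt"
	"time"
)

// Recomputes the two durations logged for calico-typha-59c649c748-2k7zv;
// all timestamps are copied verbatim from the journal entry.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches time.Time.String()
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-16 23:57:11 +0000 UTC")                // podCreationTimestamp
	firstPull := parse("2026-01-16 23:57:11.88817206 +0000 UTC")     // firstStartedPulling
	lastPull := parse("2026-01-16 23:57:14.370765543 +0000 UTC")     // lastFinishedPulling
	running := parse("2026-01-16 23:57:16.412379057 +0000 UTC")      // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time
	fmt.Println(e2e, slo)                // 5.412379057s 2.929785574s
}
```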
Jan 16 23:57:20.152760 containerd[1615]: time="2026-01-16T23:57:20.152694064Z" level=info msg="CreateContainer within sandbox \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f73ccb747656f5654cb41043e0a52ede51d253cade56438ef13d47cebbe56ee\""
Jan 16 23:57:20.153550 containerd[1615]: time="2026-01-16T23:57:20.153513766Z" level=info msg="StartContainer for \"4f73ccb747656f5654cb41043e0a52ede51d253cade56438ef13d47cebbe56ee\""
Jan 16 23:57:20.227884 containerd[1615]: time="2026-01-16T23:57:20.227682917Z" level=info msg="StartContainer for \"4f73ccb747656f5654cb41043e0a52ede51d253cade56438ef13d47cebbe56ee\" returns successfully"
Jan 16 23:57:20.771585 containerd[1615]: time="2026-01-16T23:57:20.771511509Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 23:57:20.786914 kubelet[2729]: I0116 23:57:20.785288 2729 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 16 23:57:20.808279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f73ccb747656f5654cb41043e0a52ede51d253cade56438ef13d47cebbe56ee-rootfs.mount: Deactivated successfully.
Jan 16 23:57:20.896669 containerd[1615]: time="2026-01-16T23:57:20.895325367Z" level=info msg="shim disconnected" id=4f73ccb747656f5654cb41043e0a52ede51d253cade56438ef13d47cebbe56ee namespace=k8s.io
Jan 16 23:57:20.896669 containerd[1615]: time="2026-01-16T23:57:20.895422610Z" level=warning msg="cleaning up after shim disconnected" id=4f73ccb747656f5654cb41043e0a52ede51d253cade56438ef13d47cebbe56ee namespace=k8s.io
Jan 16 23:57:20.896669 containerd[1615]: time="2026-01-16T23:57:20.895433250Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 23:57:20.967367 kubelet[2729]: I0116 23:57:20.966957 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfzlx\" (UniqueName: \"kubernetes.io/projected/fa73edb7-b960-4dc9-91ae-ac7984d8d56b-kube-api-access-dfzlx\") pod \"coredns-668d6bf9bc-czgwf\" (UID: \"fa73edb7-b960-4dc9-91ae-ac7984d8d56b\") " pod="kube-system/coredns-668d6bf9bc-czgwf"
Jan 16 23:57:20.969009 kubelet[2729]: I0116 23:57:20.967235 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59191777-c11b-4c90-aa9f-cb135874655c-tigera-ca-bundle\") pod \"calico-kube-controllers-79df8bc6d5-wt9rj\" (UID: \"59191777-c11b-4c90-aa9f-cb135874655c\") " pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj"
Jan 16 23:57:20.969009 kubelet[2729]: I0116 23:57:20.968568 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlwr5\" (UniqueName: \"kubernetes.io/projected/59191777-c11b-4c90-aa9f-cb135874655c-kube-api-access-hlwr5\") pod \"calico-kube-controllers-79df8bc6d5-wt9rj\" (UID: \"59191777-c11b-4c90-aa9f-cb135874655c\") " pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj"
Jan 16 23:57:20.969009 kubelet[2729]: I0116 23:57:20.968707 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e369e49f-7e57-4c99-8a2f-2c08450c808f-calico-apiserver-certs\") pod \"calico-apiserver-6f7cdf6968-skgrt\" (UID: \"e369e49f-7e57-4c99-8a2f-2c08450c808f\") " pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt"
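The reload error a few entries above is containerd's CNI watcher firing on the write of `calico-kubeconfig` before any `*.conf`/`*.conflist` file exists in `/etc/cni/net.d`, so the reload still finds nothing to load. A rough illustration of that directory scan using only the standard library (this is not containerd's actual loader, just the shape of the check its error message describes):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confFiles lists candidate CNI network configs the way a loader would
// consider them: only *.conf, *.conflist (and historically *.json) count.
// Other files such as calico-kubeconfig are ignored, which is why a write
// to that file can trigger a reload that still finds "no network config".
func confFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	return confs, nil
}

func main() {
	confs, err := confFiles("/etc/cni/net.d")
	if err != nil || len(confs) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	fmt.Println(confs)
}
```

Once Calico's install-cni step writes `10-calico.conflist` into that directory, the same scan succeeds and the "cni plugin not initialized" errors stop.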
\"calico-apiserver-6f7cdf6968-skgrt\" (UID: \"e369e49f-7e57-4c99-8a2f-2c08450c808f\") " pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" Jan 16 23:57:20.969691 kubelet[2729]: I0116 23:57:20.969390 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfr2\" (UniqueName: \"kubernetes.io/projected/2937c15b-91fa-4e47-b049-69032bc8570b-kube-api-access-pvfr2\") pod \"whisker-7f48594d95-tmkh9\" (UID: \"2937c15b-91fa-4e47-b049-69032bc8570b\") " pod="calico-system/whisker-7f48594d95-tmkh9" Jan 16 23:57:20.969691 kubelet[2729]: I0116 23:57:20.969490 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98bfe59b-02e4-4bdc-9e5a-0a8209ddd104-calico-apiserver-certs\") pod \"calico-apiserver-6f7cdf6968-8z7jz\" (UID: \"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104\") " pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" Jan 16 23:57:20.969691 kubelet[2729]: I0116 23:57:20.969525 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxmf9\" (UniqueName: \"kubernetes.io/projected/98bfe59b-02e4-4bdc-9e5a-0a8209ddd104-kube-api-access-gxmf9\") pod \"calico-apiserver-6f7cdf6968-8z7jz\" (UID: \"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104\") " pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" Jan 16 23:57:20.969691 kubelet[2729]: I0116 23:57:20.969553 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fqch\" (UniqueName: \"kubernetes.io/projected/db404f65-1005-4977-b8b1-05db5155d53d-kube-api-access-2fqch\") pod \"goldmane-666569f655-npqz8\" (UID: \"db404f65-1005-4977-b8b1-05db5155d53d\") " pod="calico-system/goldmane-666569f655-npqz8" Jan 16 23:57:20.969691 kubelet[2729]: I0116 23:57:20.969581 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189086c2-a129-41e8-ba84-b185657d3f10-config-volume\") pod \"coredns-668d6bf9bc-77fjj\" (UID: \"189086c2-a129-41e8-ba84-b185657d3f10\") " pod="kube-system/coredns-668d6bf9bc-77fjj" Jan 16 23:57:20.969965 kubelet[2729]: I0116 23:57:20.969644 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tj4\" (UniqueName: \"kubernetes.io/projected/189086c2-a129-41e8-ba84-b185657d3f10-kube-api-access-t7tj4\") pod \"coredns-668d6bf9bc-77fjj\" (UID: \"189086c2-a129-41e8-ba84-b185657d3f10\") " pod="kube-system/coredns-668d6bf9bc-77fjj" Jan 16 23:57:20.969965 kubelet[2729]: I0116 23:57:20.969666 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j77x\" (UniqueName: \"kubernetes.io/projected/e369e49f-7e57-4c99-8a2f-2c08450c808f-kube-api-access-2j77x\") pod \"calico-apiserver-6f7cdf6968-skgrt\" (UID: \"e369e49f-7e57-4c99-8a2f-2c08450c808f\") " pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" Jan 16 23:57:20.971419 kubelet[2729]: I0116 23:57:20.970063 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-backend-key-pair\") pod \"whisker-7f48594d95-tmkh9\" (UID: \"2937c15b-91fa-4e47-b049-69032bc8570b\") " pod="calico-system/whisker-7f48594d95-tmkh9" Jan 16 23:57:20.971419 kubelet[2729]: 
Jan 16 23:57:20.971419 kubelet[2729]: I0116 23:57:20.970094 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-ca-bundle\") pod \"whisker-7f48594d95-tmkh9\" (UID: \"2937c15b-91fa-4e47-b049-69032bc8570b\") " pod="calico-system/whisker-7f48594d95-tmkh9"
Jan 16 23:57:20.971419 kubelet[2729]: I0116 23:57:20.970141 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db404f65-1005-4977-b8b1-05db5155d53d-config\") pod \"goldmane-666569f655-npqz8\" (UID: \"db404f65-1005-4977-b8b1-05db5155d53d\") " pod="calico-system/goldmane-666569f655-npqz8"
Jan 16 23:57:20.971419 kubelet[2729]: I0116 23:57:20.971227 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/db404f65-1005-4977-b8b1-05db5155d53d-goldmane-key-pair\") pod \"goldmane-666569f655-npqz8\" (UID: \"db404f65-1005-4977-b8b1-05db5155d53d\") " pod="calico-system/goldmane-666569f655-npqz8"
Jan 16 23:57:20.971419 kubelet[2729]: I0116 23:57:20.971279 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db404f65-1005-4977-b8b1-05db5155d53d-goldmane-ca-bundle\") pod \"goldmane-666569f655-npqz8\" (UID: \"db404f65-1005-4977-b8b1-05db5155d53d\") " pod="calico-system/goldmane-666569f655-npqz8"
Jan 16 23:57:20.971660 kubelet[2729]: I0116 23:57:20.971327 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa73edb7-b960-4dc9-91ae-ac7984d8d56b-config-volume\") pod \"coredns-668d6bf9bc-czgwf\" (UID: \"fa73edb7-b960-4dc9-91ae-ac7984d8d56b\") " pod="kube-system/coredns-668d6bf9bc-czgwf"
Jan 16 23:57:21.181687 containerd[1615]: time="2026-01-16T23:57:21.181588102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-czgwf,Uid:fa73edb7-b960-4dc9-91ae-ac7984d8d56b,Namespace:kube-system,Attempt:0,}"
Jan 16 23:57:21.197989 containerd[1615]: time="2026-01-16T23:57:21.197933725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f48594d95-tmkh9,Uid:2937c15b-91fa-4e47-b049-69032bc8570b,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:21.248693 containerd[1615]: time="2026-01-16T23:57:21.248528714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-npqz8,Uid:db404f65-1005-4977-b8b1-05db5155d53d,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:21.249688 containerd[1615]: time="2026-01-16T23:57:21.249047008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77fjj,Uid:189086c2-a129-41e8-ba84-b185657d3f10,Namespace:kube-system,Attempt:0,}"
Jan 16 23:57:21.249688 containerd[1615]: time="2026-01-16T23:57:21.249313775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df8bc6d5-wt9rj,Uid:59191777-c11b-4c90-aa9f-cb135874655c,Namespace:calico-system,Attempt:0,}"
Jan 16 23:57:21.250057 containerd[1615]: time="2026-01-16T23:57:21.250022673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-8z7jz,Uid:98bfe59b-02e4-4bdc-9e5a-0a8209ddd104,Namespace:calico-apiserver,Attempt:0,}"
&PodSandboxMetadata{Name:csi-node-driver-n2dkx,Uid:418b98a5-873e-4b20-a6d4-0ef55480b923,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:21.259278 containerd[1615]: time="2026-01-16T23:57:21.259233631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-skgrt,Uid:e369e49f-7e57-4c99-8a2f-2c08450c808f,Namespace:calico-apiserver,Attempt:0,}" Jan 16 23:57:21.374427 containerd[1615]: time="2026-01-16T23:57:21.374241688Z" level=error msg="Failed to destroy network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.375028 containerd[1615]: time="2026-01-16T23:57:21.374874944Z" level=error msg="Failed to destroy network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.376294 containerd[1615]: time="2026-01-16T23:57:21.376236379Z" level=error msg="encountered an error cleaning up failed sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.376405 containerd[1615]: time="2026-01-16T23:57:21.376315021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f48594d95-tmkh9,Uid:2937c15b-91fa-4e47-b049-69032bc8570b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.377037 containerd[1615]: time="2026-01-16T23:57:21.376401944Z" level=error msg="encountered an error cleaning up failed sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.377037 containerd[1615]: time="2026-01-16T23:57:21.376428784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-czgwf,Uid:fa73edb7-b960-4dc9-91ae-ac7984d8d56b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.377160 kubelet[2729]: E0116 23:57:21.376661 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
Jan 16 23:57:21.377160 kubelet[2729]: E0116 23:57:21.376713 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 23:57:21.377160 kubelet[2729]: E0116 23:57:21.376738 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-czgwf"
Jan 16 23:57:21.377160 kubelet[2729]: E0116 23:57:21.376776 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f48594d95-tmkh9"
Jan 16 23:57:21.377271 kubelet[2729]: E0116 23:57:21.376797 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f48594d95-tmkh9"
Jan 16 23:57:21.377271 kubelet[2729]: E0116 23:57:21.376804 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-czgwf"
Jan 16 23:57:21.377271 kubelet[2729]: E0116 23:57:21.376844 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f48594d95-tmkh9_calico-system(2937c15b-91fa-4e47-b049-69032bc8570b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f48594d95-tmkh9_calico-system(2937c15b-91fa-4e47-b049-69032bc8570b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f48594d95-tmkh9" podUID="2937c15b-91fa-4e47-b049-69032bc8570b"
Jan 16 23:57:21.377373 kubelet[2729]: E0116 23:57:21.376844 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-czgwf_kube-system(fa73edb7-b960-4dc9-91ae-ac7984d8d56b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-czgwf_kube-system(fa73edb7-b960-4dc9-91ae-ac7984d8d56b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-czgwf" podUID="fa73edb7-b960-4dc9-91ae-ac7984d8d56b"
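The runs of `\\\"` in the pod_workers entries are layered quoting, not corruption: the CRI error is quoted into the CreatePodSandboxError message, which is quoted again when the structured logger renders `err="..."`. A toy reproduction of the nesting (the format strings here are illustrative, not the kubelet's exact ones):

```go
package main

import "fmt"

func main() {
	// Layer 0: the raw CRI error, which already contains quoted values.
	rpcErr := `rpc error: code = Unknown desc = plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`
	pod := "whisker-7f48594d95-tmkh9_calico-system(2937c15b-91fa-4e47-b049-69032bc8570b)"

	// Layer 1: the sandbox-creation message quotes the pod name with %q.
	createMsg := fmt.Sprintf("Failed to create sandbox for pod %q: %s", pod, rpcErr)

	// Layer 2: the pod-worker error quotes that whole message again,
	// so the quotes inside it become \".
	workerErr := fmt.Sprintf("failed to %q for %q with CreatePodSandboxError: %q",
		"CreatePodSandbox", pod, createMsg)

	// Layer 3: the structured logger renders err=%q, escaping once more;
	// this is where the \\\" runs seen in the journal come from.
	fmt.Printf("err=%q\n", workerErr)
}
```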
\\\"coredns-668d6bf9bc-czgwf_kube-system(fa73edb7-b960-4dc9-91ae-ac7984d8d56b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-czgwf" podUID="fa73edb7-b960-4dc9-91ae-ac7984d8d56b" Jan 16 23:57:21.420902 containerd[1615]: time="2026-01-16T23:57:21.420469564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 23:57:21.424927 kubelet[2729]: I0116 23:57:21.424851 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:21.428088 containerd[1615]: time="2026-01-16T23:57:21.427510626Z" level=info msg="StopPodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\"" Jan 16 23:57:21.428088 containerd[1615]: time="2026-01-16T23:57:21.427717952Z" level=info msg="Ensure that sandbox aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2 in task-service has been cleanup successfully" Jan 16 23:57:21.433400 kubelet[2729]: I0116 23:57:21.433282 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:21.434093 containerd[1615]: time="2026-01-16T23:57:21.434059316Z" level=info msg="StopPodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\"" Jan 16 23:57:21.434481 containerd[1615]: time="2026-01-16T23:57:21.434458326Z" level=info msg="Ensure that sandbox 68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a in task-service has been cleanup successfully" Jan 16 23:57:21.653027 containerd[1615]: time="2026-01-16T23:57:21.652960901Z" level=error msg="Failed to destroy network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.656148 containerd[1615]: time="2026-01-16T23:57:21.655258081Z" level=error msg="encountered an error cleaning up failed sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.657723 containerd[1615]: time="2026-01-16T23:57:21.656795280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-8z7jz,Uid:98bfe59b-02e4-4bdc-9e5a-0a8209ddd104,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.659305 kubelet[2729]: E0116 23:57:21.658804 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.659305 kubelet[2729]: E0116 23:57:21.658890 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" Jan 16 23:57:21.659305 kubelet[2729]: E0116 23:57:21.658914 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" Jan 16 23:57:21.659892 kubelet[2729]: E0116 23:57:21.658980 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:57:21.671381 containerd[1615]: time="2026-01-16T23:57:21.671322856Z" level=error msg="StopPodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" failed" error="failed to destroy network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.671821 kubelet[2729]: E0116 23:57:21.671690 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:21.672092 kubelet[2729]: E0116 23:57:21.671817 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a"} Jan 16 23:57:21.672092 kubelet[2729]: E0116 23:57:21.671881 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa73edb7-b960-4dc9-91ae-ac7984d8d56b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:21.672092 kubelet[2729]: E0116 23:57:21.671905 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa73edb7-b960-4dc9-91ae-ac7984d8d56b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-czgwf" podUID="fa73edb7-b960-4dc9-91ae-ac7984d8d56b" Jan 16 23:57:21.685350 containerd[1615]: time="2026-01-16T23:57:21.684989450Z" level=error msg="Failed to destroy network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.687359 containerd[1615]: time="2026-01-16T23:57:21.687044943Z" level=error msg="encountered an error cleaning up failed sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.687902 containerd[1615]: time="2026-01-16T23:57:21.687254749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2dkx,Uid:418b98a5-873e-4b20-a6d4-0ef55480b923,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.688205 containerd[1615]: time="2026-01-16T23:57:21.688025369Z" level=error msg="Failed to destroy network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.690117 kubelet[2729]: E0116 23:57:21.689564 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.690117 kubelet[2729]: E0116 23:57:21.689690 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:21.690117 kubelet[2729]: E0116 23:57:21.689724 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2dkx" Jan 16 23:57:21.690330 kubelet[2729]: E0116 23:57:21.690070 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:21.691334 containerd[1615]: time="2026-01-16T23:57:21.691270693Z" level=error msg="encountered an error cleaning up failed sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.691739 containerd[1615]: time="2026-01-16T23:57:21.691693223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77fjj,Uid:189086c2-a129-41e8-ba84-b185657d3f10,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.693382 kubelet[2729]: E0116 23:57:21.693190 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.693382 kubelet[2729]: E0116 23:57:21.693267 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-77fjj" Jan 16 23:57:21.693382 kubelet[2729]: E0116 23:57:21.693287 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-77fjj" Jan 16 23:57:21.693538 kubelet[2729]: E0116 23:57:21.693329 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-77fjj_kube-system(189086c2-a129-41e8-ba84-b185657d3f10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-77fjj_kube-system(189086c2-a129-41e8-ba84-b185657d3f10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-77fjj" podUID="189086c2-a129-41e8-ba84-b185657d3f10" Jan 16 23:57:21.694921 containerd[1615]: time="2026-01-16T23:57:21.694876906Z" level=error msg="Failed to destroy network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.695408 containerd[1615]: time="2026-01-16T23:57:21.695367439Z" level=error msg="encountered an error cleaning up failed sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.695550 containerd[1615]: time="2026-01-16T23:57:21.695506522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-npqz8,Uid:db404f65-1005-4977-b8b1-05db5155d53d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.696127 kubelet[2729]: E0116 23:57:21.695924 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.696127 kubelet[2729]: E0116 23:57:21.695988 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-npqz8" Jan 16 23:57:21.696127 kubelet[2729]: E0116 23:57:21.696007 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-npqz8" Jan 16 23:57:21.696277 kubelet[2729]: E0116 23:57:21.696068 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:57:21.696652 containerd[1615]: time="2026-01-16T23:57:21.696510268Z" level=error msg="StopPodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" failed" error="failed to destroy network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.697093 kubelet[2729]: E0116 23:57:21.696964 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:21.697093 kubelet[2729]: E0116 23:57:21.697011 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2"} Jan 16 23:57:21.697093 kubelet[2729]: E0116 23:57:21.697046 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2937c15b-91fa-4e47-b049-69032bc8570b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:21.697093 kubelet[2729]: E0116 23:57:21.697066 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2937c15b-91fa-4e47-b049-69032bc8570b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f48594d95-tmkh9" podUID="2937c15b-91fa-4e47-b049-69032bc8570b" Jan 16 
23:57:21.699829 containerd[1615]: time="2026-01-16T23:57:21.699614948Z" level=error msg="Failed to destroy network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.702033 containerd[1615]: time="2026-01-16T23:57:21.701901888Z" level=error msg="encountered an error cleaning up failed sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.702033 containerd[1615]: time="2026-01-16T23:57:21.701976330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-skgrt,Uid:e369e49f-7e57-4c99-8a2f-2c08450c808f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.702719 kubelet[2729]: E0116 23:57:21.702442 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.702719 kubelet[2729]: E0116 23:57:21.702499 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" Jan 16 23:57:21.702719 kubelet[2729]: E0116 23:57:21.702518 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" Jan 16 23:57:21.703016 kubelet[2729]: E0116 23:57:21.702579 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:57:21.724867 containerd[1615]: time="2026-01-16T23:57:21.724806680Z" level=error msg="Failed to destroy network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.725590 containerd[1615]: time="2026-01-16T23:57:21.725364895Z" level=error msg="encountered an error cleaning up failed sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.725590 containerd[1615]: time="2026-01-16T23:57:21.725430977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df8bc6d5-wt9rj,Uid:59191777-c11b-4c90-aa9f-cb135874655c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.726347 kubelet[2729]: E0116 23:57:21.725734 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:21.726347 kubelet[2729]: E0116 23:57:21.725814 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" Jan 16 23:57:21.726347 kubelet[2729]: E0116 23:57:21.725838 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" Jan 16 23:57:21.726460 kubelet[2729]: E0116 23:57:21.725892 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:57:22.150523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b-shm.mount: Deactivated successfully. Jan 16 23:57:22.150765 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2-shm.mount: Deactivated successfully. Jan 16 23:57:22.150912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a-shm.mount: Deactivated successfully. Jan 16 23:57:22.438054 kubelet[2729]: I0116 23:57:22.437821 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:22.441607 containerd[1615]: time="2026-01-16T23:57:22.439315753Z" level=info msg="StopPodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\"" Jan 16 23:57:22.441607 containerd[1615]: time="2026-01-16T23:57:22.439524998Z" level=info msg="Ensure that sandbox 2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b in task-service has been cleanup successfully" Jan 16 23:57:22.448646 kubelet[2729]: I0116 23:57:22.447187 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:22.450584 containerd[1615]: time="2026-01-16T23:57:22.449838141Z" level=info msg="StopPodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\"" Jan 16 23:57:22.450584 containerd[1615]: time="2026-01-16T23:57:22.450207670Z" level=info msg="Ensure that sandbox 8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40 in task-service has been cleanup successfully" Jan 16 23:57:22.459343 kubelet[2729]: I0116 23:57:22.459310 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:22.462122 containerd[1615]: time="2026-01-16T23:57:22.461376994Z" level=info msg="StopPodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\"" Jan 16 23:57:22.462122 containerd[1615]: time="2026-01-16T23:57:22.461594600Z" level=info msg="Ensure that sandbox 813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b in task-service has been cleanup successfully" Jan 16 23:57:22.463565 kubelet[2729]: I0116 23:57:22.462865 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:22.464920 containerd[1615]: time="2026-01-16T23:57:22.464883124Z" level=info msg="StopPodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\"" Jan 16 23:57:22.465432 containerd[1615]: time="2026-01-16T23:57:22.465406377Z" level=info msg="Ensure that sandbox 19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b in task-service has been cleanup successfully" Jan 16 23:57:22.469438 kubelet[2729]: I0116 23:57:22.467604 2729 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:22.470148 containerd[1615]: time="2026-01-16T23:57:22.470115937Z" level=info msg="StopPodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\"" Jan 16 23:57:22.472184 containerd[1615]: time="2026-01-16T23:57:22.472143709Z" level=info msg="Ensure that sandbox b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8 in task-service has been cleanup successfully" Jan 16 23:57:22.480956 kubelet[2729]: I0116 23:57:22.479575 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:22.482575 containerd[1615]: time="2026-01-16T23:57:22.482228766Z" level=info msg="StopPodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\"" Jan 16 23:57:22.483267 containerd[1615]: time="2026-01-16T23:57:22.483238631Z" level=info msg="Ensure that sandbox 5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1 in task-service has been cleanup successfully" Jan 16 23:57:22.548925 containerd[1615]: time="2026-01-16T23:57:22.548605776Z" level=error msg="StopPodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" failed" error="failed to destroy network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:22.549297 kubelet[2729]: E0116 23:57:22.549259 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:22.549435 kubelet[2729]: E0116 23:57:22.549411 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b"} Jan 16 23:57:22.549527 kubelet[2729]: E0116 23:57:22.549513 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"189086c2-a129-41e8-ba84-b185657d3f10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:22.549776 kubelet[2729]: E0116 23:57:22.549637 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"189086c2-a129-41e8-ba84-b185657d3f10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-77fjj" 
podUID="189086c2-a129-41e8-ba84-b185657d3f10" Jan 16 23:57:22.558553 containerd[1615]: time="2026-01-16T23:57:22.558090138Z" level=error msg="StopPodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" failed" error="failed to destroy network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:22.559067 kubelet[2729]: E0116 23:57:22.558903 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:22.559067 kubelet[2729]: E0116 23:57:22.558963 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b"} Jan 16 23:57:22.559067 kubelet[2729]: E0116 23:57:22.558998 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db404f65-1005-4977-b8b1-05db5155d53d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:22.559067 kubelet[2729]: E0116 23:57:22.559019 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db404f65-1005-4977-b8b1-05db5155d53d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:57:22.565602 containerd[1615]: time="2026-01-16T23:57:22.565540848Z" level=error msg="StopPodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" failed" error="failed to destroy network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:22.566814 kubelet[2729]: E0116 23:57:22.566323 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:22.566814 kubelet[2729]: E0116 
23:57:22.566683 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b"} Jan 16 23:57:22.566814 kubelet[2729]: E0116 23:57:22.566743 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:22.566814 kubelet[2729]: E0116 23:57:22.566780 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:57:22.575943 containerd[1615]: time="2026-01-16T23:57:22.575807749Z" level=error msg="StopPodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" failed" error="failed to destroy network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:22.576157 kubelet[2729]: E0116 23:57:22.576093 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:22.576205 kubelet[2729]: E0116 23:57:22.576176 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40"} Jan 16 23:57:22.576249 kubelet[2729]: E0116 23:57:22.576225 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"418b98a5-873e-4b20-a6d4-0ef55480b923\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:22.576304 kubelet[2729]: E0116 23:57:22.576268 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"418b98a5-873e-4b20-a6d4-0ef55480b923\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:22.588740 containerd[1615]: time="2026-01-16T23:57:22.588683557Z" level=error msg="StopPodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" failed" error="failed to destroy network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:22.589643 kubelet[2729]: E0116 23:57:22.588955 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:22.589643 kubelet[2729]: E0116 23:57:22.589009 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8"} Jan 16 23:57:22.589643 kubelet[2729]: E0116 23:57:22.589047 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e369e49f-7e57-4c99-8a2f-2c08450c808f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:22.589643 kubelet[2729]: E0116 23:57:22.589073 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e369e49f-7e57-4c99-8a2f-2c08450c808f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:57:22.590436 containerd[1615]: time="2026-01-16T23:57:22.590370360Z" level=error msg="StopPodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" failed" error="failed to destroy network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:57:22.590805 kubelet[2729]: E0116 23:57:22.590611 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:22.590805 kubelet[2729]: E0116 23:57:22.590709 2729 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1"} Jan 16 23:57:22.590805 kubelet[2729]: E0116 23:57:22.590745 2729 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59191777-c11b-4c90-aa9f-cb135874655c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 23:57:22.590805 kubelet[2729]: E0116 23:57:22.590764 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59191777-c11b-4c90-aa9f-cb135874655c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:57:28.785045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656393754.mount: Deactivated successfully. 
Jan 16 23:57:28.820304 containerd[1615]: time="2026-01-16T23:57:28.820239130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:28.821617 containerd[1615]: time="2026-01-16T23:57:28.821360436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 16 23:57:28.822655 containerd[1615]: time="2026-01-16T23:57:28.822569385Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:28.827184 containerd[1615]: time="2026-01-16T23:57:28.825862982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:57:28.827184 containerd[1615]: time="2026-01-16T23:57:28.826720282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.405463858s" Jan 16 23:57:28.827184 containerd[1615]: time="2026-01-16T23:57:28.826757803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 16 23:57:28.847049 containerd[1615]: time="2026-01-16T23:57:28.846993999Z" level=info msg="CreateContainer within sandbox \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 23:57:28.876298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550956110.mount: Deactivated successfully. Jan 16 23:57:28.879270 containerd[1615]: time="2026-01-16T23:57:28.879120394Z" level=info msg="CreateContainer within sandbox \"718ca7ca084717846e6f6f4736eb6b8f40b3dd75b3728fb843e938c443105157\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"707af708cb1ea9d712dc9e4c665075513beac9690bd8e2ceb6686a3c51f0d2db\"" Jan 16 23:57:28.881881 containerd[1615]: time="2026-01-16T23:57:28.881096680Z" level=info msg="StartContainer for \"707af708cb1ea9d712dc9e4c665075513beac9690bd8e2ceb6686a3c51f0d2db\"" Jan 16 23:57:28.964990 containerd[1615]: time="2026-01-16T23:57:28.964754006Z" level=info msg="StartContainer for \"707af708cb1ea9d712dc9e4c665075513beac9690bd8e2ceb6686a3c51f0d2db\" returns successfully" Jan 16 23:57:29.109876 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 23:57:29.110102 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
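For scale, the pull record above is self-consistent: 150,934,424 bytes in 7.405463858s works out to roughly 20 MB/s. A quick check, illustrative arithmetic only, with both values copied from the "Pulled image" entry:

package main

import "fmt"

func main() {
    // Values from the "Pulled image ... in 7.405463858s" entry above.
    const bytes = 150934424.0
    const seconds = 7.405463858
    fmt.Printf("average pull throughput: %.1f MB/s (%.1f MiB/s)\n",
        bytes/seconds/1e6, bytes/seconds/(1<<20))
    // Prints: average pull throughput: 20.4 MB/s (19.4 MiB/s)
}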
Jan 16 23:57:29.282666 containerd[1615]: time="2026-01-16T23:57:29.282610722Z" level=info msg="StopPodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\"" Jan 16 23:57:29.535245 kubelet[2729]: I0116 23:57:29.532841 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nj8rz" podStartSLOduration=1.6837484919999999 podStartE2EDuration="18.532820017s" podCreationTimestamp="2026-01-16 23:57:11 +0000 UTC" firstStartedPulling="2026-01-16 23:57:11.978984069 +0000 UTC m=+23.854489882" lastFinishedPulling="2026-01-16 23:57:28.828055554 +0000 UTC m=+40.703561407" observedRunningTime="2026-01-16 23:57:29.52907957 +0000 UTC m=+41.404585423" watchObservedRunningTime="2026-01-16 23:57:29.532820017 +0000 UTC m=+41.408325830" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.470 [INFO][3874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.471 [INFO][3874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" iface="eth0" netns="/var/run/netns/cni-099871ce-6683-4347-bedc-9d43cb1db9a3" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.472 [INFO][3874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" iface="eth0" netns="/var/run/netns/cni-099871ce-6683-4347-bedc-9d43cb1db9a3" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.472 [INFO][3874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" iface="eth0" netns="/var/run/netns/cni-099871ce-6683-4347-bedc-9d43cb1db9a3" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.472 [INFO][3874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.472 [INFO][3874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.536 [INFO][3883] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.536 [INFO][3883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.536 [INFO][3883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.560 [WARNING][3883] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.560 [INFO][3883] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.563 [INFO][3883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:29.574304 containerd[1615]: 2026-01-16 23:57:29.569 [INFO][3874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:29.575559 containerd[1615]: time="2026-01-16T23:57:29.575233963Z" level=info msg="TearDown network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" successfully" Jan 16 23:57:29.575559 containerd[1615]: time="2026-01-16T23:57:29.575270484Z" level=info msg="StopPodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" returns successfully" Jan 16 23:57:29.645258 kubelet[2729]: I0116 23:57:29.643860 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfr2\" (UniqueName: \"kubernetes.io/projected/2937c15b-91fa-4e47-b049-69032bc8570b-kube-api-access-pvfr2\") pod \"2937c15b-91fa-4e47-b049-69032bc8570b\" (UID: \"2937c15b-91fa-4e47-b049-69032bc8570b\") " Jan 16 23:57:29.645577 kubelet[2729]: I0116 23:57:29.645539 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-backend-key-pair\") pod \"2937c15b-91fa-4e47-b049-69032bc8570b\" (UID: \"2937c15b-91fa-4e47-b049-69032bc8570b\") " Jan 16 23:57:29.645794 kubelet[2729]: I0116 23:57:29.645773 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-ca-bundle\") pod \"2937c15b-91fa-4e47-b049-69032bc8570b\" (UID: \"2937c15b-91fa-4e47-b049-69032bc8570b\") " Jan 16 23:57:29.657912 kubelet[2729]: I0116 23:57:29.653516 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2937c15b-91fa-4e47-b049-69032bc8570b-kube-api-access-pvfr2" (OuterVolumeSpecName: "kube-api-access-pvfr2") pod "2937c15b-91fa-4e47-b049-69032bc8570b" (UID: "2937c15b-91fa-4e47-b049-69032bc8570b"). InnerVolumeSpecName "kube-api-access-pvfr2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 23:57:29.663658 kubelet[2729]: I0116 23:57:29.662107 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2937c15b-91fa-4e47-b049-69032bc8570b" (UID: "2937c15b-91fa-4e47-b049-69032bc8570b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 23:57:29.663658 kubelet[2729]: I0116 23:57:29.662694 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2937c15b-91fa-4e47-b049-69032bc8570b" (UID: "2937c15b-91fa-4e47-b049-69032bc8570b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 23:57:29.748085 kubelet[2729]: I0116 23:57:29.748026 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pvfr2\" (UniqueName: \"kubernetes.io/projected/2937c15b-91fa-4e47-b049-69032bc8570b-kube-api-access-pvfr2\") on node \"ci-4081-3-6-n-db2d61d92f\" DevicePath \"\"" Jan 16 23:57:29.748373 kubelet[2729]: I0116 23:57:29.748334 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-db2d61d92f\" DevicePath \"\"" Jan 16 23:57:29.748527 kubelet[2729]: I0116 23:57:29.748503 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2937c15b-91fa-4e47-b049-69032bc8570b-whisker-ca-bundle\") on node \"ci-4081-3-6-n-db2d61d92f\" DevicePath \"\"" Jan 16 23:57:29.784582 systemd[1]: run-netns-cni\x2d099871ce\x2d6683\x2d4347\x2dbedc\x2d9d43cb1db9a3.mount: Deactivated successfully. Jan 16 23:57:29.784745 systemd[1]: var-lib-kubelet-pods-2937c15b\x2d91fa\x2d4e47\x2db049\x2d69032bc8570b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpvfr2.mount: Deactivated successfully. Jan 16 23:57:29.784828 systemd[1]: var-lib-kubelet-pods-2937c15b\x2d91fa\x2d4e47\x2db049\x2d69032bc8570b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 16 23:57:30.656487 kubelet[2729]: I0116 23:57:30.656111 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b693a10-74eb-43ee-978b-c4010636e57f-whisker-backend-key-pair\") pod \"whisker-5c6b945f-kpw28\" (UID: \"7b693a10-74eb-43ee-978b-c4010636e57f\") " pod="calico-system/whisker-5c6b945f-kpw28" Jan 16 23:57:30.656487 kubelet[2729]: I0116 23:57:30.656186 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b693a10-74eb-43ee-978b-c4010636e57f-whisker-ca-bundle\") pod \"whisker-5c6b945f-kpw28\" (UID: \"7b693a10-74eb-43ee-978b-c4010636e57f\") " pod="calico-system/whisker-5c6b945f-kpw28" Jan 16 23:57:30.656487 kubelet[2729]: I0116 23:57:30.656210 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6n5\" (UniqueName: \"kubernetes.io/projected/7b693a10-74eb-43ee-978b-c4010636e57f-kube-api-access-cq6n5\") pod \"whisker-5c6b945f-kpw28\" (UID: \"7b693a10-74eb-43ee-978b-c4010636e57f\") " pod="calico-system/whisker-5c6b945f-kpw28" Jan 16 23:57:30.893486 containerd[1615]: time="2026-01-16T23:57:30.893379739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6b945f-kpw28,Uid:7b693a10-74eb-43ee-978b-c4010636e57f,Namespace:calico-system,Attempt:0,}" Jan 16 23:57:31.170884 systemd-networkd[1238]: cali5a6acd70e32: Link UP Jan 16 23:57:31.172192 systemd-networkd[1238]: cali5a6acd70e32: Gained carrier Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:30.983 [INFO][4034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.025 [INFO][4034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0 whisker-5c6b945f- calico-system 7b693a10-74eb-43ee-978b-c4010636e57f 911 0 2026-01-16 23:57:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c6b945f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f whisker-5c6b945f-kpw28 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5a6acd70e32 [] [] }} ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.026 [INFO][4034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.077 [INFO][4052] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" HandleID="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.077 [INFO][4052] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" HandleID="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"whisker-5c6b945f-kpw28", "timestamp":"2026-01-16 23:57:31.077759042 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.081 [INFO][4052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.081 [INFO][4052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.082 [INFO][4052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.099 [INFO][4052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.112 [INFO][4052] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.120 [INFO][4052] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.124 [INFO][4052] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.129 [INFO][4052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.130 [INFO][4052] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.138 [INFO][4052] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00 Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.145 [INFO][4052] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.153 [INFO][4052] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.65/26] block=192.168.58.64/26 handle="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.153 [INFO][4052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.65/26] handle="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.154 [INFO][4052] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:31.195728 containerd[1615]: 2026-01-16 23:57:31.154 [INFO][4052] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.65/26] IPv6=[] ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" HandleID="k8s-pod-network.dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.199063 containerd[1615]: 2026-01-16 23:57:31.157 [INFO][4034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0", GenerateName:"whisker-5c6b945f-", Namespace:"calico-system", SelfLink:"", UID:"7b693a10-74eb-43ee-978b-c4010636e57f", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6b945f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"whisker-5c6b945f-kpw28", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a6acd70e32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:31.199063 containerd[1615]: 2026-01-16 23:57:31.157 [INFO][4034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.65/32] ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.199063 containerd[1615]: 2026-01-16 23:57:31.157 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a6acd70e32 ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.199063 containerd[1615]: 2026-01-16 23:57:31.172 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.199063 containerd[1615]: 2026-01-16 23:57:31.176 [INFO][4034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0", GenerateName:"whisker-5c6b945f-", Namespace:"calico-system", SelfLink:"", UID:"7b693a10-74eb-43ee-978b-c4010636e57f", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6b945f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00", Pod:"whisker-5c6b945f-kpw28", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a6acd70e32", MAC:"e2:73:5a:3f:26:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:31.199063 containerd[1615]: 2026-01-16 23:57:31.193 [INFO][4034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00" Namespace="calico-system" Pod="whisker-5c6b945f-kpw28" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--5c6b945f--kpw28-eth0" Jan 16 23:57:31.219231 containerd[1615]: time="2026-01-16T23:57:31.218957216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:31.219231 containerd[1615]: time="2026-01-16T23:57:31.219030898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:31.219231 containerd[1615]: time="2026-01-16T23:57:31.219042898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:31.219231 containerd[1615]: time="2026-01-16T23:57:31.219138261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:31.277349 containerd[1615]: time="2026-01-16T23:57:31.277301065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6b945f-kpw28,Uid:7b693a10-74eb-43ee-978b-c4010636e57f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd7e15df9714371b08bde76556a5d4ca1c690ea06b55e2dfa7471383aa711f00\"" Jan 16 23:57:31.281022 containerd[1615]: time="2026-01-16T23:57:31.280972308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:57:31.636208 containerd[1615]: time="2026-01-16T23:57:31.635979751Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:31.637813 containerd[1615]: time="2026-01-16T23:57:31.637712470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:57:31.637992 containerd[1615]: time="2026-01-16T23:57:31.637875594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:57:31.638377 kubelet[2729]: E0116 23:57:31.638203 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:57:31.638533 kubelet[2729]: E0116 23:57:31.638383 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:57:31.650237 kubelet[2729]: E0116 23:57:31.650159 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a43c7915b4074bd1b752561b3055b41c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:31.654243 containerd[1615]: time="2026-01-16T23:57:31.653238144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:57:31.999584 containerd[1615]: time="2026-01-16T23:57:31.999348944Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:32.001423 containerd[1615]: time="2026-01-16T23:57:32.001336469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:57:32.001558 containerd[1615]: time="2026-01-16T23:57:32.001402190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:32.001974 kubelet[2729]: E0116 23:57:32.001867 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:57:32.002373 kubelet[2729]: E0116 23:57:32.002035 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:57:32.002504 kubelet[2729]: E0116 23:57:32.002324 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:32.004693 kubelet[2729]: E0116 23:57:32.004569 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:57:32.263312 kubelet[2729]: I0116 23:57:32.263198 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2937c15b-91fa-4e47-b049-69032bc8570b" path="/var/lib/kubelet/pods/2937c15b-91fa-4e47-b049-69032bc8570b/volumes" Jan 16 
23:57:32.516886 kubelet[2729]: E0116 23:57:32.516635 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:57:32.900595 systemd-networkd[1238]: cali5a6acd70e32: Gained IPv6LL Jan 16 23:57:33.247513 containerd[1615]: time="2026-01-16T23:57:33.246921950Z" level=info msg="StopPodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\"" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.316 [INFO][4166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.317 [INFO][4166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" iface="eth0" netns="/var/run/netns/cni-150b190b-f624-ba82-5ad3-2616f78c0e08" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.318 [INFO][4166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" iface="eth0" netns="/var/run/netns/cni-150b190b-f624-ba82-5ad3-2616f78c0e08" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.319 [INFO][4166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" iface="eth0" netns="/var/run/netns/cni-150b190b-f624-ba82-5ad3-2616f78c0e08" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.319 [INFO][4166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.319 [INFO][4166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.367 [INFO][4187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.369 [INFO][4187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.369 [INFO][4187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
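The entries above show the full failure ladder for the whisker pod: PullImage fails with NotFound, kubelet records ErrImagePull for each container, and the next sync is throttled to ImagePullBackOff. Below is a minimal Go sketch of the doubling backoff kubelet applies between pull retries; the 10s initial delay and 5m cap mirror kubelet's documented defaults, and everything else here is illustrative rather than kubelet's own code.

package main

import (
	"fmt"
	"time"
)

// pullBackoff returns the delay before retry n of a failing image pull:
// an initial delay doubled on every failure, clamped at a limit.
// Kubelet's defaults (10s initial, 5m cap) are assumptions taken from
// its documented behaviour, not lifted from kubelet source.
func pullBackoff(n int, initial, limit time.Duration) time.Duration {
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("pull retry %d delayed %v\n", n, pullBackoff(n, 10*time.Second, 5*time.Minute))
	}
}

Run as written, this prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, which is the cadence that produces the "Back-off pulling image" entries above.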
Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.382 [WARNING][4187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.382 [INFO][4187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.385 [INFO][4187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:33.401612 containerd[1615]: 2026-01-16 23:57:33.395 [INFO][4166] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:33.404813 containerd[1615]: time="2026-01-16T23:57:33.403798217Z" level=info msg="TearDown network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" successfully" Jan 16 23:57:33.404813 containerd[1615]: time="2026-01-16T23:57:33.403843698Z" level=info msg="StopPodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" returns successfully" Jan 16 23:57:33.408320 containerd[1615]: time="2026-01-16T23:57:33.408277757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df8bc6d5-wt9rj,Uid:59191777-c11b-4c90-aa9f-cb135874655c,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:33.412558 systemd[1]: run-netns-cni\x2d150b190b\x2df624\x2dba82\x2d5ad3\x2d2616f78c0e08.mount: Deactivated successfully. 
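The teardown just above shows Calico's two-step release: it first releases by the IPAM handle, logs "Asked to release address but it doesn't exist. Ignoring" when no allocation is recorded under that handle, then retries by workload ID. A toy Go sketch of that order follows, with a map standing in for the allocation datastore; the names releaseAddress, handle-A, and workload-A, and the address used, are invented for illustration.

package main

import "fmt"

// allocations maps an IPAM handle or workload ID to an address. A real
// datastore tracks allocations inside /26 blocks; a map shows the flow.
var allocations = map[string]string{
	"workload-A": "192.168.58.70", // illustrative: only a workload-keyed record survives
}

// releaseAddress mirrors the order in the log: try the handle first,
// treat a missing handle as a warning rather than an error, then fall
// back to the workload ID so stale allocations still get cleaned up.
func releaseAddress(handle, workloadID string) {
	if ip, ok := allocations[handle]; ok {
		delete(allocations, handle)
		fmt.Println("released", ip, "using handleID")
		return
	}
	fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
	if ip, ok := allocations[workloadID]; ok {
		delete(allocations, workloadID)
		fmt.Println("released", ip, "using workloadID")
	}
}

func main() {
	releaseAddress("handle-A", "workload-A")
}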
Jan 16 23:57:33.599687 systemd-networkd[1238]: caliacac572f0e4: Link UP Jan 16 23:57:33.599950 systemd-networkd[1238]: caliacac572f0e4: Gained carrier Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.485 [INFO][4197] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.505 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0 calico-kube-controllers-79df8bc6d5- calico-system 59191777-c11b-4c90-aa9f-cb135874655c 935 0 2026-01-16 23:57:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79df8bc6d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f calico-kube-controllers-79df8bc6d5-wt9rj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliacac572f0e4 [] [] }} ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.506 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.538 [INFO][4210] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" HandleID="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.538 [INFO][4210] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" HandleID="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"calico-kube-controllers-79df8bc6d5-wt9rj", "timestamp":"2026-01-16 23:57:33.53859819 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.539 [INFO][4210] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.539 [INFO][4210] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
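Every IPAM operation in these entries is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": assignments and releases for all pods on the node are serialized, so concurrent CNI ADD/DEL invocations can never claim the same slot. A minimal sketch of that pattern with a sync.Mutex; the counter-based assignment is a stand-in, not Calico's block logic.

package main

import (
	"fmt"
	"sync"
)

// hostWideIPAMLock plays the role of the "host-wide IPAM lock" above:
// one mutex serializing all address assignment on the node.
var (
	hostWideIPAMLock sync.Mutex
	nextOffset       = 65 // next free host offset in 192.168.58.64/26 (illustrative)
)

// autoAssign claims one address under the lock, so two CNI ADDs racing
// for the same block always observe distinct free slots.
func autoAssign() string {
	hostWideIPAMLock.Lock()
	defer hostWideIPAMLock.Unlock()
	ip := fmt.Sprintf("192.168.58.%d/26", nextOffset)
	nextOffset++
	return ip
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"whisker", "calico-kube-controllers", "csi-node-driver"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", autoAssign())
		}(pod)
	}
	wg.Wait()
}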
Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.539 [INFO][4210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.552 [INFO][4210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.559 [INFO][4210] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.566 [INFO][4210] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.570 [INFO][4210] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.573 [INFO][4210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.573 [INFO][4210] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.575 [INFO][4210] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.581 [INFO][4210] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.591 [INFO][4210] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.66/26] block=192.168.58.64/26 handle="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.591 [INFO][4210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.66/26] handle="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.591 [INFO][4210] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
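Both sandboxes so far draw from the node's affine block 192.168.58.64/26, and the claims come out sequentially: .65 for the whisker pod earlier, .66 here. The self-contained sketch below does first-free assignment inside such a /26 using only net/netip; the one-flag-per-address layout is an assumption that mirrors how IPAM blocks are usually described, not Calico's actual block encoding.

package main

import (
	"fmt"
	"net/netip"
)

// block models one /26 affinity block: 64 addresses, one flag each.
type block struct {
	cidr netip.Prefix
	used [64]bool
}

// assign returns the first unclaimed address, skipping offset 0 (the
// block's own network address), which reproduces the .65/.66/.67 order.
func (b *block) assign() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := 0; i < 64; i++ {
		if i > 0 && !b.used[i] {
			b.used[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.58.64/26")}
	for i := 0; i < 3; i++ {
		ip, _ := b.assign()
		fmt.Println(ip) // 192.168.58.65, 192.168.58.66, 192.168.58.67
	}
}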
Jan 16 23:57:33.623922 containerd[1615]: 2026-01-16 23:57:33.591 [INFO][4210] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.66/26] IPv6=[] ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" HandleID="k8s-pod-network.9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.624910 containerd[1615]: 2026-01-16 23:57:33.594 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0", GenerateName:"calico-kube-controllers-79df8bc6d5-", Namespace:"calico-system", SelfLink:"", UID:"59191777-c11b-4c90-aa9f-cb135874655c", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79df8bc6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"calico-kube-controllers-79df8bc6d5-wt9rj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacac572f0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:33.624910 containerd[1615]: 2026-01-16 23:57:33.594 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.66/32] ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.624910 containerd[1615]: 2026-01-16 23:57:33.594 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacac572f0e4 ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.624910 containerd[1615]: 2026-01-16 23:57:33.601 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" 
WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.624910 containerd[1615]: 2026-01-16 23:57:33.601 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0", GenerateName:"calico-kube-controllers-79df8bc6d5-", Namespace:"calico-system", SelfLink:"", UID:"59191777-c11b-4c90-aa9f-cb135874655c", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79df8bc6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af", Pod:"calico-kube-controllers-79df8bc6d5-wt9rj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacac572f0e4", MAC:"c6:9b:18:d3:61:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:33.624910 containerd[1615]: 2026-01-16 23:57:33.619 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af" Namespace="calico-system" Pod="calico-kube-controllers-79df8bc6d5-wt9rj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:33.645196 containerd[1615]: time="2026-01-16T23:57:33.644944607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:33.645542 containerd[1615]: time="2026-01-16T23:57:33.645005568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:33.645542 containerd[1615]: time="2026-01-16T23:57:33.645263334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:33.646120 containerd[1615]: time="2026-01-16T23:57:33.646020871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:33.703429 containerd[1615]: time="2026-01-16T23:57:33.703379513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df8bc6d5-wt9rj,Uid:59191777-c11b-4c90-aa9f-cb135874655c,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af\"" Jan 16 23:57:33.707946 containerd[1615]: time="2026-01-16T23:57:33.707899134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:57:34.050267 containerd[1615]: time="2026-01-16T23:57:34.050031932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:34.052070 containerd[1615]: time="2026-01-16T23:57:34.051765691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:57:34.052070 containerd[1615]: time="2026-01-16T23:57:34.051899974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:34.052584 kubelet[2729]: E0116 23:57:34.052509 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:34.053922 kubelet[2729]: E0116 23:57:34.053436 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:34.054266 kubelet[2729]: E0116 23:57:34.054203 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlwr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:34.055801 kubelet[2729]: E0116 23:57:34.055752 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:57:34.110387 kubelet[2729]: I0116 23:57:34.109365 2729 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 16 23:57:34.536658 kubelet[2729]: E0116 23:57:34.534824 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:57:34.884188 systemd-networkd[1238]: caliacac572f0e4: Gained IPv6LL Jan 16 23:57:35.247684 containerd[1615]: time="2026-01-16T23:57:35.247270823Z" level=info msg="StopPodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\"" Jan 16 23:57:35.249485 containerd[1615]: time="2026-01-16T23:57:35.248823457Z" level=info msg="StopPodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\"" Jan 16 23:57:35.249485 containerd[1615]: time="2026-01-16T23:57:35.248015159Z" level=info msg="StopPodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\"" Jan 16 23:57:35.249485 containerd[1615]: time="2026-01-16T23:57:35.248112561Z" level=info msg="StopPodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\"" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.384 [INFO][4350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.384 [INFO][4350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" iface="eth0" netns="/var/run/netns/cni-52684841-8b1c-179c-45f7-037ebcfdf62a" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.385 [INFO][4350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" iface="eth0" netns="/var/run/netns/cni-52684841-8b1c-179c-45f7-037ebcfdf62a" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.385 [INFO][4350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" iface="eth0" netns="/var/run/netns/cni-52684841-8b1c-179c-45f7-037ebcfdf62a" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.385 [INFO][4350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.385 [INFO][4350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.429 [INFO][4376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.431 [INFO][4376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.431 [INFO][4376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.446 [WARNING][4376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.446 [INFO][4376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.449 [INFO][4376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:35.476999 containerd[1615]: 2026-01-16 23:57:35.459 [INFO][4350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:35.485511 containerd[1615]: time="2026-01-16T23:57:35.485001330Z" level=info msg="TearDown network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" successfully" Jan 16 23:57:35.485511 containerd[1615]: time="2026-01-16T23:57:35.485129252Z" level=info msg="StopPodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" returns successfully" Jan 16 23:57:35.487411 systemd[1]: run-netns-cni\x2d52684841\x2d8b1c\x2d179c\x2d45f7\x2d037ebcfdf62a.mount: Deactivated successfully. 
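The unit name run-netns-cni\x2d52684841\x2d… in the line above is systemd's escaping of the netns path: /var/run resolves to /run, '/' separators become '-', and a literal '-' inside a path component becomes \x2d. A small sketch of just those two rules, enough to map between the mount unit names and the /run/netns/cni-… paths in this log; real systemd-escape also hex-encodes other non-alphanumeric bytes, which this sketch deliberately skips.

package main

import (
	"fmt"
	"strings"
)

// escapeMountUnit applies the two systemd-escape rules these entries
// exercise: a literal '-' in a path component becomes \x2d, and '/'
// separators become '-'. Other bytes are left alone, which is a
// simplification of the full escaping scheme.
func escapeMountUnit(path string) string {
	parts := strings.Split(strings.TrimPrefix(path, "/"), "/")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
	}
	return strings.Join(parts, "-") + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit("/run/netns/cni-52684841-8b1c-179c-45f7-037ebcfdf62a"))
	// run-netns-cni\x2d52684841\x2d8b1c\x2d179c\x2d45f7\x2d037ebcfdf62a.mount
}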
Jan 16 23:57:35.494535 containerd[1615]: time="2026-01-16T23:57:35.494478418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2dkx,Uid:418b98a5-873e-4b20-a6d4-0ef55480b923,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:35.578658 kernel: bpftool[4411]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.410 [INFO][4349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.410 [INFO][4349] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" iface="eth0" netns="/var/run/netns/cni-8ae01a01-9374-80d3-7528-c41b3c772692" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.412 [INFO][4349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" iface="eth0" netns="/var/run/netns/cni-8ae01a01-9374-80d3-7528-c41b3c772692" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.413 [INFO][4349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" iface="eth0" netns="/var/run/netns/cni-8ae01a01-9374-80d3-7528-c41b3c772692" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.413 [INFO][4349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.413 [INFO][4349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.495 [INFO][4384] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.496 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.496 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.568 [WARNING][4384] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.569 [INFO][4384] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.580 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
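Four StopPodSandbox teardowns run concurrently here (handlers [4343], [4349], [4350], [4352]), so entries for different sandboxes interleave. When untangling runs like this, extracting the ContainerID, HandleID, and Workload fields per line and grouping by ContainerID is usually enough; a short sketch with a regexp of my own, applied to one of the lines above.

package main

import (
	"fmt"
	"regexp"
)

// field matches the key="value" pairs Calico appends to its CNI log
// entries; grouping lines by ContainerID untangles interleaved teardowns.
var field = regexp.MustCompile(`(ContainerID|HandleID|Workload)="([^"]+)"`)

func main() {
	line := `ipam/ipam_plugin.go 436: Releasing address using handleID ` +
		`ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" ` +
		`Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0"`
	for _, m := range field.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%-12s %s\n", m[1], m[2])
	}
}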
Jan 16 23:57:35.621436 containerd[1615]: 2026-01-16 23:57:35.610 [INFO][4349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:35.626243 containerd[1615]: time="2026-01-16T23:57:35.624452076Z" level=info msg="TearDown network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" successfully" Jan 16 23:57:35.626243 containerd[1615]: time="2026-01-16T23:57:35.624494396Z" level=info msg="StopPodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" returns successfully" Jan 16 23:57:35.626811 containerd[1615]: time="2026-01-16T23:57:35.626503401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-czgwf,Uid:fa73edb7-b960-4dc9-91ae-ac7984d8d56b,Namespace:kube-system,Attempt:1,}" Jan 16 23:57:35.630887 systemd[1]: run-netns-cni\x2d8ae01a01\x2d9374\x2d80d3\x2d7528\x2dc41b3c772692.mount: Deactivated successfully. Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.443 [INFO][4343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.446 [INFO][4343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" iface="eth0" netns="/var/run/netns/cni-ed73fa0a-d65c-8909-7a57-bfaece756453" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.447 [INFO][4343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" iface="eth0" netns="/var/run/netns/cni-ed73fa0a-d65c-8909-7a57-bfaece756453" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.448 [INFO][4343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" iface="eth0" netns="/var/run/netns/cni-ed73fa0a-d65c-8909-7a57-bfaece756453" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.448 [INFO][4343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.448 [INFO][4343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.574 [INFO][4394] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.576 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.581 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.612 [WARNING][4394] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.612 [INFO][4394] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.623 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:35.662137 containerd[1615]: 2026-01-16 23:57:35.651 [INFO][4343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:35.665772 containerd[1615]: time="2026-01-16T23:57:35.665506298Z" level=info msg="TearDown network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" successfully" Jan 16 23:57:35.665772 containerd[1615]: time="2026-01-16T23:57:35.665548379Z" level=info msg="StopPodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" returns successfully" Jan 16 23:57:35.671798 systemd[1]: run-netns-cni\x2ded73fa0a\x2dd65c\x2d8909\x2d7a57\x2dbfaece756453.mount: Deactivated successfully. Jan 16 23:57:35.678371 containerd[1615]: time="2026-01-16T23:57:35.677032032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-8z7jz,Uid:98bfe59b-02e4-4bdc-9e5a-0a8209ddd104,Namespace:calico-apiserver,Attempt:1,}" Jan 16 23:57:35.696381 kubelet[2729]: E0116 23:57:35.696206 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.443 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.444 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" iface="eth0" netns="/var/run/netns/cni-9577a6b6-4174-9f08-98b4-6a96e5141f3f" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.444 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" iface="eth0" netns="/var/run/netns/cni-9577a6b6-4174-9f08-98b4-6a96e5141f3f" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.445 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" iface="eth0" netns="/var/run/netns/cni-9577a6b6-4174-9f08-98b4-6a96e5141f3f" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.447 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.447 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.679 [INFO][4392] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.680 [INFO][4392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.680 [INFO][4392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.713 [WARNING][4392] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.713 [INFO][4392] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.716 [INFO][4392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:35.736968 containerd[1615]: 2026-01-16 23:57:35.722 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:35.737577 containerd[1615]: time="2026-01-16T23:57:35.737199234Z" level=info msg="TearDown network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" successfully" Jan 16 23:57:35.737577 containerd[1615]: time="2026-01-16T23:57:35.737229475Z" level=info msg="StopPodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" returns successfully" Jan 16 23:57:35.752827 containerd[1615]: time="2026-01-16T23:57:35.752783497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-npqz8,Uid:db404f65-1005-4977-b8b1-05db5155d53d,Namespace:calico-system,Attempt:1,}" Jan 16 23:57:36.004301 systemd-networkd[1238]: vxlan.calico: Link UP Jan 16 23:57:36.004308 systemd-networkd[1238]: vxlan.calico: Gained carrier Jan 16 23:57:36.072028 systemd-networkd[1238]: calic9ed88e9a46: Link UP Jan 16 23:57:36.073159 systemd-networkd[1238]: calic9ed88e9a46: Gained carrier Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.773 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0 csi-node-driver- calico-system 418b98a5-873e-4b20-a6d4-0ef55480b923 959 0 2026-01-16 23:57:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f csi-node-driver-n2dkx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic9ed88e9a46 [] [] }} ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.773 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.934 [INFO][4461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" HandleID="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.934 [INFO][4461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" HandleID="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"csi-node-driver-n2dkx", "timestamp":"2026-01-16 23:57:35.934393076 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.934 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.934 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.934 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.961 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.970 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.976 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.979 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.985 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.985 [INFO][4461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:35.989 [INFO][4461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031 Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:36.010 [INFO][4461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:36.030 [INFO][4461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.67/26] block=192.168.58.64/26 handle="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:36.031 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.67/26] handle="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:36.031 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
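The IPAM records above show the allocation pattern for csi-node-driver-n2dkx: confirm affinity for block 192.168.58.64/26, then claim the next free address (192.168.58.67) and write the block back. A minimal sketch of that block arithmetic follows — this is not Calico's allocator, just an illustration of how a /26 block yields the addresses seen in this log; the pre-claimed addresses are assumptions standing in for the datastore state.

```go
// Sketch only: the block math behind "Attempting to assign 1 addresses
// from block block=192.168.58.64/26" above. Not Calico's implementation.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.58.64/26")

	// Assumed prior state for illustration; the real claims live in the
	// Calico datastore, guarded by the host-wide IPAM lock.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.58.65"): true,
		netip.MustParseAddr("192.168.58.66"): true,
	}

	// Walk the block and take the first unclaimed address.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if a == block.Addr() || used[a] {
			continue // skip the block's first address (for simplicity) and claimed IPs
		}
		fmt.Println("claimed:", a) // prints 192.168.58.67, matching the log
		return
	}
	fmt.Println("block exhausted")
}
```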
Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:36.031 [INFO][4461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.67/26] IPv6=[] ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" HandleID="k8s-pod-network.51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.137386 containerd[1615]: 2026-01-16 23:57:36.042 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"418b98a5-873e-4b20-a6d4-0ef55480b923", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"csi-node-driver-n2dkx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9ed88e9a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.137386 containerd[1615]: 2026-01-16 23:57:36.042 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.67/32] ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.137386 containerd[1615]: 2026-01-16 23:57:36.042 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9ed88e9a46 ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.137386 containerd[1615]: 2026-01-16 23:57:36.081 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.137386 containerd[1615]: 2026-01-16 23:57:36.081 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"418b98a5-873e-4b20-a6d4-0ef55480b923", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031", Pod:"csi-node-driver-n2dkx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9ed88e9a46", MAC:"8a:85:71:9a:9f:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.137386 containerd[1615]: 2026-01-16 23:57:36.122 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031" Namespace="calico-system" Pod="csi-node-driver-n2dkx" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:36.203557 systemd-networkd[1238]: calia9ac081bf5c: Link UP Jan 16 23:57:36.204418 systemd-networkd[1238]: calia9ac081bf5c: Gained carrier Jan 16 23:57:36.234711 containerd[1615]: time="2026-01-16T23:57:36.232671126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:36.234711 containerd[1615]: time="2026-01-16T23:57:36.232748644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:36.234711 containerd[1615]: time="2026-01-16T23:57:36.232759924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.234711 containerd[1615]: time="2026-01-16T23:57:36.232866041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.254586 containerd[1615]: time="2026-01-16T23:57:36.253958294Z" level=info msg="StopPodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\"" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:35.848 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0 coredns-668d6bf9bc- kube-system fa73edb7-b960-4dc9-91ae-ac7984d8d56b 960 0 2026-01-16 23:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f coredns-668d6bf9bc-czgwf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia9ac081bf5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:35.850 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.040 [INFO][4480] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" HandleID="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.040 [INFO][4480] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" HandleID="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002573d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"coredns-668d6bf9bc-czgwf", "timestamp":"2026-01-16 23:57:36.040433465 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.040 [INFO][4480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.040 [INFO][4480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
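Every containerd record in this section carries two timestamps (journald's and the plugin's), a unit name with PID, a level, and a caller file:line. A small parser for that shape can make logs like these grep-able; the layout below is inferred from the surrounding lines, so treat the expression as an approximation for containerd-style records, not a stable format.

```go
// Sketch: split one containerd record from this log into its fields.
// The regular expression is reverse-engineered from the lines above.
package main

import (
	"fmt"
	"regexp"
)

var record = regexp.MustCompile(
	`^(\w+ \d+ [\d:.]+) (\w+)\[(\d+)\]: ([\d-]+ [\d:.]+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := `Jan 16 23:57:36.136762 containerd[1615]: 2026-01-16 23:57:36.031 ` +
		`[INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.`
	m := record.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// m[1]=journal ts, m[4]=plugin ts, m[6]=request id; printed fields below.
	fmt.Printf("unit=%s pid=%s level=%s caller=%s:%s msg=%q\n",
		m[2], m[3], m[5], m[7], m[8], m[9])
}
```

Note this targets the containerd records only; kubelet lines (e.g. the `E0116 ...` entries) use klog's different prefix.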
Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.040 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.070 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.083 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.113 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.146 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.156 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.156 [INFO][4480] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.160 [INFO][4480] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41 Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.173 [INFO][4480] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.187 [INFO][4480] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.68/26] block=192.168.58.64/26 handle="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.187 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.68/26] handle="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.187 [INFO][4480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
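Each assignment above is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", which is why the concurrent CNI ADDs for csi-node-driver, coredns, the apiserver, and goldmane claim .67, .68, .69, .70 strictly one at a time. A sketch of that pattern with an advisory file lock follows — the mechanism and lock path are assumptions for illustration, not Calico's code.

```go
// Sketch of the "host-wide IPAM lock" pattern logged above: serialize
// allocators across processes on one node with flock(2). Lock path is
// hypothetical.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	// LOCK_EX blocks until no other holder remains, so concurrent CNI ADDs
	// run their IPAM steps sequentially, as the interleaved log shows.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	return fn()
}

func main() {
	err := withHostWideLock("/tmp/ipam.lock", func() error {
		fmt.Println("assigning address under lock")
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "ipam:", err)
	}
}
```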
Jan 16 23:57:36.264116 containerd[1615]: 2026-01-16 23:57:36.187 [INFO][4480] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.68/26] IPv6=[] ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" HandleID="k8s-pod-network.c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.265186 containerd[1615]: 2026-01-16 23:57:36.189 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa73edb7-b960-4dc9-91ae-ac7984d8d56b", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"coredns-668d6bf9bc-czgwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9ac081bf5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.265186 containerd[1615]: 2026-01-16 23:57:36.191 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.68/32] ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.265186 containerd[1615]: 2026-01-16 23:57:36.191 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9ac081bf5c ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.265186 containerd[1615]: 2026-01-16 23:57:36.201 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.265186 containerd[1615]: 2026-01-16 23:57:36.220 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa73edb7-b960-4dc9-91ae-ac7984d8d56b", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41", Pod:"coredns-668d6bf9bc-czgwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9ac081bf5c", MAC:"9a:5b:e7:eb:08:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.265186 containerd[1615]: 2026-01-16 23:57:36.238 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41" Namespace="kube-system" Pod="coredns-668d6bf9bc-czgwf" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:36.376819 systemd-networkd[1238]: cali4437fd1b0e4: Link UP Jan 16 23:57:36.380566 systemd-networkd[1238]: cali4437fd1b0e4: Gained carrier Jan 16 23:57:36.408121 containerd[1615]: time="2026-01-16T23:57:36.407743927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:36.408474 containerd[1615]: time="2026-01-16T23:57:36.408342793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:36.408563 containerd[1615]: time="2026-01-16T23:57:36.408393832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.409021 containerd[1615]: time="2026-01-16T23:57:36.408924021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:35.945 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0 calico-apiserver-6f7cdf6968- calico-apiserver 98bfe59b-02e4-4bdc-9e5a-0a8209ddd104 962 0 2026-01-16 23:57:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7cdf6968 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f calico-apiserver-6f7cdf6968-8z7jz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4437fd1b0e4 [] [] }} ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:35.946 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.146 [INFO][4491] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" HandleID="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.147 [INFO][4491] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" HandleID="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db8c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"calico-apiserver-6f7cdf6968-8z7jz", "timestamp":"2026-01-16 23:57:36.146456076 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.147 [INFO][4491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.187 [INFO][4491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
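The coredns WorkloadEndpoint dumps above print ports as Go hex literals (Port:0x35, Port:0x23c1). A two-line check confirms these are the DNS and metrics ports declared earlier in decimal ({dns UDP 53}, {metrics TCP 9153}):

```go
// Decode the hex port fields from the endpoint dumps above.
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // prints "53 9153": coredns' dns and metrics ports
}
```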
Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.188 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.206 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.220 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.252 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.269 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.284 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.286 [INFO][4491] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.292 [INFO][4491] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.301 [INFO][4491] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.331 [INFO][4491] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.69/26] block=192.168.58.64/26 handle="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.331 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.69/26] handle="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.331 [INFO][4491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
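The host-side veth names in these records (calic9ed88e9a46, calia9ac081bf5c, cali4437fd1b0e4) are all "cali" plus 11 hex characters, suggesting a stable name hashed from the endpoint identity so it fits the kernel's 15-byte interface-name limit. The sketch below shows one way to derive such a name; the exact hash inputs are an assumption, not lifted from Calico's source.

```go
// Sketch: derive a stable, length-limited host-side veth name from a
// workload endpoint key. The key format here is hypothetical.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func vethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	// Linux caps interface names at 15 bytes: 4 ("cali") + 11 hex chars fits.
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("ci-4081-3-6-n-db2d61d92f/csi-node-driver-n2dkx/eth0"))
}
```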
Jan 16 23:57:36.449807 containerd[1615]: 2026-01-16 23:57:36.332 [INFO][4491] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.69/26] IPv6=[] ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" HandleID="k8s-pod-network.dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.451940 containerd[1615]: 2026-01-16 23:57:36.361 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"calico-apiserver-6f7cdf6968-8z7jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4437fd1b0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.451940 containerd[1615]: 2026-01-16 23:57:36.361 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.69/32] ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.451940 containerd[1615]: 2026-01-16 23:57:36.361 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4437fd1b0e4 ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.451940 containerd[1615]: 2026-01-16 23:57:36.393 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.451940 containerd[1615]: 2026-01-16 
23:57:36.395 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c", Pod:"calico-apiserver-6f7cdf6968-8z7jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4437fd1b0e4", MAC:"62:ba:5e:0e:75:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.451940 containerd[1615]: 2026-01-16 23:57:36.434 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-8z7jz" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:36.501848 systemd[1]: run-netns-cni\x2d9577a6b6\x2d4174\x2d9f08\x2d98b4\x2d6a96e5141f3f.mount: Deactivated successfully. Jan 16 23:57:36.572735 containerd[1615]: time="2026-01-16T23:57:36.572213723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-czgwf,Uid:fa73edb7-b960-4dc9-91ae-ac7984d8d56b,Namespace:kube-system,Attempt:1,} returns sandbox id \"c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41\"" Jan 16 23:57:36.604564 containerd[1615]: time="2026-01-16T23:57:36.604420609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:36.604885 containerd[1615]: time="2026-01-16T23:57:36.604511207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:36.604885 containerd[1615]: time="2026-01-16T23:57:36.604548646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.604885 containerd[1615]: time="2026-01-16T23:57:36.604707043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.641206 containerd[1615]: time="2026-01-16T23:57:36.641042798Z" level=info msg="CreateContainer within sandbox \"c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 23:57:36.685731 containerd[1615]: time="2026-01-16T23:57:36.684114843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2dkx,Uid:418b98a5-873e-4b20-a6d4-0ef55480b923,Namespace:calico-system,Attempt:1,} returns sandbox id \"51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031\"" Jan 16 23:57:36.689053 systemd-networkd[1238]: cali7141e2966ae: Link UP Jan 16 23:57:36.694060 systemd-networkd[1238]: cali7141e2966ae: Gained carrier Jan 16 23:57:36.745426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807054252.mount: Deactivated successfully. Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:35.945 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0 goldmane-666569f655- calico-system db404f65-1005-4977-b8b1-05db5155d53d 961 0 2026-01-16 23:57:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f goldmane-666569f655-npqz8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7141e2966ae [] [] }} ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:35.951 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.176 [INFO][4496] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" HandleID="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.178 [INFO][4496] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" HandleID="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000a3a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"goldmane-666569f655-npqz8", "timestamp":"2026-01-16 23:57:36.176113779 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.179 [INFO][4496] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.331 [INFO][4496] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.334 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.420 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.461 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.521 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.535 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.554 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.554 [INFO][4496] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.563 [INFO][4496] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454 Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.586 [INFO][4496] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.610 [INFO][4496] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.70/26] block=192.168.58.64/26 handle="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.610 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.70/26] handle="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.610 [INFO][4496] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
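The image-pull failures in this log (the "Back-off pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4" error earlier, and the csi pull that fails the same way below) surface as gRPC NotFound errors from containerd's image service, which kubelet turns into ImagePullBackOff rather than retrying immediately. A sketch of that classification follows, assuming the google.golang.org/grpc module; it illustrates the status-code check, not kubelet's actual code path.

```go
// Sketch: distinguish "tag missing from the registry" (NotFound, back off)
// from transient pull failures, using gRPC status codes.
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func classifyPullError(err error) string {
	if err == nil {
		return "ok"
	}
	if status.Code(err) == codes.NotFound {
		// Fast retries are pointless: the reference cannot resolve,
		// hence the ImagePullBackOff entries in this log.
		return "image not found; back off"
	}
	return "transient; retry"
}

func main() {
	err := status.Error(codes.NotFound,
		`failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.30.4"`)
	fmt.Println(classifyPullError(err))
	fmt.Println(classifyPullError(errors.New("connection reset")))
}
```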
Jan 16 23:57:36.747468 containerd[1615]: 2026-01-16 23:57:36.610 [INFO][4496] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.70/26] IPv6=[] ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" HandleID="k8s-pod-network.c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.749284 containerd[1615]: 2026-01-16 23:57:36.657 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"db404f65-1005-4977-b8b1-05db5155d53d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"goldmane-666569f655-npqz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7141e2966ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.749284 containerd[1615]: 2026-01-16 23:57:36.665 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.70/32] ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.749284 containerd[1615]: 2026-01-16 23:57:36.665 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7141e2966ae ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.749284 containerd[1615]: 2026-01-16 23:57:36.694 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.749284 containerd[1615]: 2026-01-16 23:57:36.707 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" 
Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"db404f65-1005-4977-b8b1-05db5155d53d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454", Pod:"goldmane-666569f655-npqz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7141e2966ae", MAC:"4a:74:74:4c:0a:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:36.749284 containerd[1615]: 2026-01-16 23:57:36.734 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454" Namespace="calico-system" Pod="goldmane-666569f655-npqz8" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:36.777721 containerd[1615]: time="2026-01-16T23:57:36.776826109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:57:36.778790 containerd[1615]: time="2026-01-16T23:57:36.778518432Z" level=info msg="CreateContainer within sandbox \"c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9be88320471f92166c78a94836b5e0bca3d6a08ae90d1486f1afd3977619e2e4\"" Jan 16 23:57:36.780990 containerd[1615]: time="2026-01-16T23:57:36.780830100Z" level=info msg="StartContainer for \"9be88320471f92166c78a94836b5e0bca3d6a08ae90d1486f1afd3977619e2e4\"" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.482 [INFO][4569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.490 [INFO][4569] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" iface="eth0" netns="/var/run/netns/cni-6d9e7bc7-b722-cadf-d11a-7941c1aeb9d6" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.497 [INFO][4569] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" iface="eth0" netns="/var/run/netns/cni-6d9e7bc7-b722-cadf-d11a-7941c1aeb9d6" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.499 [INFO][4569] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" iface="eth0" netns="/var/run/netns/cni-6d9e7bc7-b722-cadf-d11a-7941c1aeb9d6" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.499 [INFO][4569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.499 [INFO][4569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.786 [INFO][4627] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.787 [INFO][4627] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.788 [INFO][4627] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.829 [WARNING][4627] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.830 [INFO][4627] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.853 [INFO][4627] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:36.911391 containerd[1615]: 2026-01-16 23:57:36.886 [INFO][4569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:36.917706 containerd[1615]: time="2026-01-16T23:57:36.917361475Z" level=info msg="TearDown network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" successfully" Jan 16 23:57:36.917706 containerd[1615]: time="2026-01-16T23:57:36.917583790Z" level=info msg="StopPodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" returns successfully" Jan 16 23:57:36.925645 containerd[1615]: time="2026-01-16T23:57:36.922696797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77fjj,Uid:189086c2-a129-41e8-ba84-b185657d3f10,Namespace:kube-system,Attempt:1,}" Jan 16 23:57:36.930906 containerd[1615]: time="2026-01-16T23:57:36.930615742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:36.931637 containerd[1615]: time="2026-01-16T23:57:36.931421244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:36.931795 containerd[1615]: time="2026-01-16T23:57:36.931599440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.935888 containerd[1615]: time="2026-01-16T23:57:36.933608915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:36.989035 containerd[1615]: time="2026-01-16T23:57:36.988993768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-8z7jz,Uid:98bfe59b-02e4-4bdc-9e5a-0a8209ddd104,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c\"" Jan 16 23:57:37.055697 containerd[1615]: time="2026-01-16T23:57:37.055338521Z" level=info msg="StartContainer for \"9be88320471f92166c78a94836b5e0bca3d6a08ae90d1486f1afd3977619e2e4\" returns successfully" Jan 16 23:57:37.174017 containerd[1615]: time="2026-01-16T23:57:37.173864192Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:37.179035 containerd[1615]: time="2026-01-16T23:57:37.177745150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:57:37.179252 kubelet[2729]: E0116 23:57:37.178034 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:37.179252 kubelet[2729]: E0116 23:57:37.178150 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:37.180425 kubelet[2729]: E0116 23:57:37.179591 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:37.180549 containerd[1615]: time="2026-01-16T23:57:37.178854647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:57:37.180650 containerd[1615]: time="2026-01-16T23:57:37.180379055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:37.181793 containerd[1615]: time="2026-01-16T23:57:37.181739306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-npqz8,Uid:db404f65-1005-4977-b8b1-05db5155d53d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454\"" Jan 16 23:57:37.247580 containerd[1615]: time="2026-01-16T23:57:37.247491685Z" level=info msg="StopPodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\"" Jan 16 23:57:37.311408 systemd-networkd[1238]: cali9950a033a30: Link UP Jan 16 23:57:37.316889 systemd-networkd[1238]: cali9950a033a30: Gained carrier Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.152 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0 coredns-668d6bf9bc- kube-system 189086c2-a129-41e8-ba84-b185657d3f10 978 0 2026-01-16 23:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f coredns-668d6bf9bc-77fjj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9950a033a30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-" Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.152 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.215 [INFO][4832] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" HandleID="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.215 [INFO][4832] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" HandleID="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3800), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"coredns-668d6bf9bc-77fjj", "timestamp":"2026-01-16 23:57:37.21538944 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.215 [INFO][4832] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.215 [INFO][4832] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
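
Interleaved with this CNI setup, the node's image pulls keep failing the same way: each "trying next host - response was http.StatusNotFound" followed by a failed PullImage (for example ghcr.io/flatcar/calico/csi:v3.30.4 above) means the registry answered 404 for the requested manifest, leaving containerd no further host to fall back to. Below is a minimal Go sketch of the same probe, assuming anonymous pull access and the standard OCI distribution token flow; the repository and tag are taken from the log, and nothing here is containerd's actual resolver code.

    // manifestcheck.go: reproduce the 404 behind the PullImage failures above.
    // Sketch only - assumes ghcr.io grants an anonymous pull token for the repo.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        repo, tag := "flatcar/calico/csi", "v3.30.4" // names taken from the log

        // Fetch an anonymous pull token (standard distribution token flow).
        resp, err := http.Get(fmt.Sprintf(
            "https://ghcr.io/token?scope=repository:%s:pull", repo))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            panic(err)
        }

        // HEAD the manifest; a 404 here is the http.StatusNotFound that the
        // resolver logs just before PullImage fails.
        req, err := http.NewRequest(http.MethodHead, fmt.Sprintf(
            "https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        res.Body.Close()
        fmt.Printf("%s:%s -> %s\n", repo, tag, res.Status)
    }

For this tag the probe should print a 404 status, matching the resolver's behavior; a 200 would instead point at a local credential or mirror problem rather than a missing tag.
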
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.215 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f'
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.230 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.238 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.246 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.260 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.265 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.266 [INFO][4832] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.269 [INFO][4832] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.279 [INFO][4832] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.293 [INFO][4832] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.71/26] block=192.168.58.64/26 handle="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.295 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.71/26] handle="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" host="ci-4081-3-6-n-db2d61d92f"
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.295 [INFO][4832] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
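
The [4832] ipam entries above trace Calico's per-host allocation path end to end: take the host-wide lock, confirm this node's affinity to the 192.168.58.64/26 block, load the block, claim the first free address (192.168.58.71 here), write the block back, and release the lock. Below is a toy Go model of just the block walk, with the block and the already-taken addresses hard-coded from the log; it is illustrative only, not Calico's ipam.go, which serializes this through the lock and a datastore compare-and-swap.

    // A toy model of the block walk logged above: scan a /26 the host has
    // affinity for and claim the first unallocated address. Illustrative only.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.58.64/26") // host's affine block
        allocated := map[netip.Addr]bool{}
        // Pretend .64-.70 are already assigned, as the claim of .71 implies.
        for i, a := 0, block.Addr(); i < 7; i++ {
            allocated[a] = true
            a = a.Next()
        }

        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                fmt.Println("claimed", a) // prints: claimed 192.168.58.71
                return
            }
        }
        fmt.Println("block full: would fall back to claiming a new block affinity")
    }

Running it prints the same address the log records in "Successfully claimed IPs: [192.168.58.71/26]"; when a block is exhausted, real Calico moves on to claim affinity for another free /26 from the pool.
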
Jan 16 23:57:37.351680 containerd[1615]: 2026-01-16 23:57:37.295 [INFO][4832] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.71/26] IPv6=[] ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" HandleID="k8s-pod-network.ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.353132 containerd[1615]: 2026-01-16 23:57:37.301 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"189086c2-a129-41e8-ba84-b185657d3f10", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"coredns-668d6bf9bc-77fjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9950a033a30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.353132 containerd[1615]: 2026-01-16 23:57:37.303 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.71/32] ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.353132 containerd[1615]: 2026-01-16 23:57:37.303 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9950a033a30 ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.353132 containerd[1615]: 2026-01-16 23:57:37.314 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.353132 containerd[1615]: 2026-01-16 23:57:37.316 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"189086c2-a129-41e8-ba84-b185657d3f10", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550", Pod:"coredns-668d6bf9bc-77fjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9950a033a30", MAC:"36:d0:b4:8c:34:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.353132 containerd[1615]: 2026-01-16 23:57:37.337 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550" Namespace="kube-system" Pod="coredns-668d6bf9bc-77fjj" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:37.393805 containerd[1615]: time="2026-01-16T23:57:37.393189625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:37.393805 containerd[1615]: time="2026-01-16T23:57:37.393285063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:37.393805 containerd[1615]: time="2026-01-16T23:57:37.393297383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.393805 containerd[1615]: time="2026-01-16T23:57:37.393421980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.444444 systemd-networkd[1238]: calic9ed88e9a46: Gained IPv6LL Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.370 [INFO][4848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.371 [INFO][4848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" iface="eth0" netns="/var/run/netns/cni-4d15b9de-0953-750c-0e93-bcd5621c123a" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.371 [INFO][4848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" iface="eth0" netns="/var/run/netns/cni-4d15b9de-0953-750c-0e93-bcd5621c123a" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.372 [INFO][4848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" iface="eth0" netns="/var/run/netns/cni-4d15b9de-0953-750c-0e93-bcd5621c123a" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.372 [INFO][4848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.372 [INFO][4848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.417 [INFO][4867] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.423 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.423 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.442 [WARNING][4867] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.442 [INFO][4867] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.448 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:37.460883 containerd[1615]: 2026-01-16 23:57:37.455 [INFO][4848] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:37.462496 containerd[1615]: time="2026-01-16T23:57:37.461746705Z" level=info msg="TearDown network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" successfully" Jan 16 23:57:37.462614 containerd[1615]: time="2026-01-16T23:57:37.461786224Z" level=info msg="StopPodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" returns successfully" Jan 16 23:57:37.463529 containerd[1615]: time="2026-01-16T23:57:37.463487628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-skgrt,Uid:e369e49f-7e57-4c99-8a2f-2c08450c808f,Namespace:calico-apiserver,Attempt:1,}" Jan 16 23:57:37.467435 containerd[1615]: time="2026-01-16T23:57:37.467367107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77fjj,Uid:189086c2-a129-41e8-ba84-b185657d3f10,Namespace:kube-system,Attempt:1,} returns sandbox id \"ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550\"" Jan 16 23:57:37.473506 containerd[1615]: time="2026-01-16T23:57:37.473451459Z" level=info msg="CreateContainer within sandbox \"ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 23:57:37.493786 systemd[1]: run-netns-cni\x2d4d15b9de\x2d0953\x2d750c\x2d0e93\x2dbcd5621c123a.mount: Deactivated successfully. Jan 16 23:57:37.493952 systemd[1]: run-netns-cni\x2d6d9e7bc7\x2db722\x2dcadf\x2dd11a\x2d7941c1aeb9d6.mount: Deactivated successfully. Jan 16 23:57:37.534741 containerd[1615]: time="2026-01-16T23:57:37.534578055Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:37.539713 containerd[1615]: time="2026-01-16T23:57:37.538977803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:37.539713 containerd[1615]: time="2026-01-16T23:57:37.539113920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:37.541141 kubelet[2729]: E0116 23:57:37.539306 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:37.541141 kubelet[2729]: E0116 23:57:37.539361 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:37.541141 kubelet[2729]: E0116 23:57:37.539647 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxmf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:37.541141 kubelet[2729]: E0116 23:57:37.541003 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:57:37.544137 containerd[1615]: time="2026-01-16T23:57:37.543760862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:57:37.551456 containerd[1615]: time="2026-01-16T23:57:37.551366703Z" level=info msg="CreateContainer within sandbox \"ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c487531643d6d2a3ce326227f4db0ffc3354d360e734787c66cf4ea9de072cd\"" Jan 16 23:57:37.552696 containerd[1615]: time="2026-01-16T23:57:37.552332762Z" level=info msg="StartContainer for \"2c487531643d6d2a3ce326227f4db0ffc3354d360e734787c66cf4ea9de072cd\"" Jan 16 23:57:37.572844 systemd-networkd[1238]: vxlan.calico: Gained IPv6LL Jan 16 23:57:37.636663 
systemd-networkd[1238]: cali4437fd1b0e4: Gained IPv6LL Jan 16 23:57:37.690817 containerd[1615]: time="2026-01-16T23:57:37.690766455Z" level=info msg="StartContainer for \"2c487531643d6d2a3ce326227f4db0ffc3354d360e734787c66cf4ea9de072cd\" returns successfully" Jan 16 23:57:37.774769 kubelet[2729]: E0116 23:57:37.771987 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:57:37.795411 kubelet[2729]: I0116 23:57:37.793462 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-czgwf" podStartSLOduration=43.793437738 podStartE2EDuration="43.793437738s" podCreationTimestamp="2026-01-16 23:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:37.790298004 +0000 UTC m=+49.665803897" watchObservedRunningTime="2026-01-16 23:57:37.793437738 +0000 UTC m=+49.668943551" Jan 16 23:57:37.848903 systemd-networkd[1238]: cali6d78e29e333: Link UP Jan 16 23:57:37.849712 systemd-networkd[1238]: cali6d78e29e333: Gained carrier Jan 16 23:57:37.894290 systemd-networkd[1238]: calia9ac081bf5c: Gained IPv6LL Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.586 [INFO][4910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0 calico-apiserver-6f7cdf6968- calico-apiserver e369e49f-7e57-4c99-8a2f-2c08450c808f 1000 0 2026-01-16 23:57:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7cdf6968 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-db2d61d92f calico-apiserver-6f7cdf6968-skgrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d78e29e333 [] [] }} ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.587 [INFO][4910] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.673 [INFO][4941] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" HandleID="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.899687 
containerd[1615]: 2026-01-16 23:57:37.673 [INFO][4941] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" HandleID="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-db2d61d92f", "pod":"calico-apiserver-6f7cdf6968-skgrt", "timestamp":"2026-01-16 23:57:37.673641694 +0000 UTC"}, Hostname:"ci-4081-3-6-n-db2d61d92f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.674 [INFO][4941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.674 [INFO][4941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.674 [INFO][4941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-db2d61d92f' Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.698 [INFO][4941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.715 [INFO][4941] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.732 [INFO][4941] ipam/ipam.go 511: Trying affinity for 192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.738 [INFO][4941] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.748 [INFO][4941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.748 [INFO][4941] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.759 [INFO][4941] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6 Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.790 [INFO][4941] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.820 [INFO][4941] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.72/26] block=192.168.58.64/26 handle="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.822 [INFO][4941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.72/26] handle="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" 
host="ci-4081-3-6-n-db2d61d92f" Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.822 [INFO][4941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:37.899687 containerd[1615]: 2026-01-16 23:57:37.823 [INFO][4941] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.72/26] IPv6=[] ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" HandleID="k8s-pod-network.895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.901073 containerd[1615]: 2026-01-16 23:57:37.828 [INFO][4910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"e369e49f-7e57-4c99-8a2f-2c08450c808f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"", Pod:"calico-apiserver-6f7cdf6968-skgrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d78e29e333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.901073 containerd[1615]: 2026-01-16 23:57:37.829 [INFO][4910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.72/32] ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.901073 containerd[1615]: 2026-01-16 23:57:37.829 [INFO][4910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d78e29e333 ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.901073 containerd[1615]: 2026-01-16 23:57:37.842 [INFO][4910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" 
Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.901073 containerd[1615]: 2026-01-16 23:57:37.851 [INFO][4910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"e369e49f-7e57-4c99-8a2f-2c08450c808f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6", Pod:"calico-apiserver-6f7cdf6968-skgrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d78e29e333", MAC:"32:a0:5b:13:df:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:37.901073 containerd[1615]: 2026-01-16 23:57:37.888 [INFO][4910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7cdf6968-skgrt" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:37.910283 containerd[1615]: time="2026-01-16T23:57:37.909736215Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:37.915014 containerd[1615]: time="2026-01-16T23:57:37.914674672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:57:37.915014 containerd[1615]: time="2026-01-16T23:57:37.914821509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:57:37.917944 kubelet[2729]: E0116 23:57:37.916332 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:37.917944 kubelet[2729]: E0116 23:57:37.916389 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:37.917944 kubelet[2729]: E0116 23:57:37.916617 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:37.921884 containerd[1615]: time="2026-01-16T23:57:37.920578028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:57:37.922124 kubelet[2729]: E0116 23:57:37.921746 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:37.956925 containerd[1615]: time="2026-01-16T23:57:37.949731015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:57:37.956925 containerd[1615]: time="2026-01-16T23:57:37.949823933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:57:37.956925 containerd[1615]: time="2026-01-16T23:57:37.949940891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:37.956925 containerd[1615]: time="2026-01-16T23:57:37.950091448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:57:38.014616 containerd[1615]: time="2026-01-16T23:57:38.014513190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7cdf6968-skgrt,Uid:e369e49f-7e57-4c99-8a2f-2c08450c808f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6\"" Jan 16 23:57:38.266054 containerd[1615]: time="2026-01-16T23:57:38.265373401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:38.268504 containerd[1615]: time="2026-01-16T23:57:38.268118787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:57:38.268504 containerd[1615]: time="2026-01-16T23:57:38.268261944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:38.269515 kubelet[2729]: E0116 23:57:38.268975 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:38.269515 kubelet[2729]: E0116 23:57:38.269064 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:38.269515 kubelet[2729]: E0116 23:57:38.269457 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fqch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:38.272927 containerd[1615]: time="2026-01-16T23:57:38.272305103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:38.273934 kubelet[2729]: E0116 23:57:38.273416 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:57:38.595969 systemd-networkd[1238]: cali7141e2966ae: Gained IPv6LL Jan 16 23:57:38.608316 containerd[1615]: time="2026-01-16T23:57:38.607907509Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:38.610240 containerd[1615]: time="2026-01-16T23:57:38.609690514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:38.610240 containerd[1615]: time="2026-01-16T23:57:38.609773552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:38.610384 kubelet[2729]: E0116 23:57:38.610040 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:38.610384 kubelet[2729]: E0116 23:57:38.610100 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:38.610384 kubelet[2729]: E0116 23:57:38.610244 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j77x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:38.611807 kubelet[2729]: E0116 23:57:38.611743 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:57:38.792706 kubelet[2729]: E0116 23:57:38.792333 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:57:38.792706 kubelet[2729]: E0116 23:57:38.792428 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:57:38.792706 kubelet[2729]: E0116 23:57:38.792545 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:57:38.794909 kubelet[2729]: E0116 23:57:38.794698 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:38.821810 kubelet[2729]: I0116 23:57:38.819336 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-77fjj" podStartSLOduration=44.819314625 podStartE2EDuration="44.819314625s" podCreationTimestamp="2026-01-16 23:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 23:57:38.800306203 +0000 UTC m=+50.675812096" watchObservedRunningTime="2026-01-16 23:57:38.819314625 +0000 UTC m=+50.694820478" Jan 16 23:57:39.112242 systemd-networkd[1238]: cali6d78e29e333: Gained IPv6LL Jan 16 23:57:39.364138 systemd-networkd[1238]: cali9950a033a30: Gained IPv6LL Jan 16 23:57:39.794359 kubelet[2729]: E0116 23:57:39.794145 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:57:45.252153 containerd[1615]: time="2026-01-16T23:57:45.251776838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:57:45.600834 containerd[1615]: time="2026-01-16T23:57:45.600529028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:45.602190 containerd[1615]: time="2026-01-16T23:57:45.602091488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:57:45.602357 containerd[1615]: time="2026-01-16T23:57:45.602248445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:57:45.602691 kubelet[2729]: E0116 23:57:45.602466 2729 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:57:45.602691 kubelet[2729]: E0116 23:57:45.602552 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:57:45.603702 kubelet[2729]: E0116 23:57:45.602724 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a43c7915b4074bd1b752561b3055b41c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:45.606400 containerd[1615]: time="2026-01-16T23:57:45.606105395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:57:45.946877 containerd[1615]: time="2026-01-16T23:57:45.946577053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:45.948564 containerd[1615]: time="2026-01-16T23:57:45.948368949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:57:45.948564 containerd[1615]: time="2026-01-16T23:57:45.948495188Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:45.949182 kubelet[2729]: E0116 23:57:45.949001 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:57:45.949316 kubelet[2729]: E0116 23:57:45.949186 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:57:45.950051 kubelet[2729]: E0116 23:57:45.949462 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:45.951280 kubelet[2729]: E0116 23:57:45.951206 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:57:48.225482 containerd[1615]: time="2026-01-16T23:57:48.225216550Z" level=info msg="StopPodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\"" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.277 [WARNING][5050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"db404f65-1005-4977-b8b1-05db5155d53d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454", Pod:"goldmane-666569f655-npqz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7141e2966ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.279 [INFO][5050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.279 [INFO][5050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" iface="eth0" netns="" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.279 [INFO][5050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.279 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.306 [INFO][5059] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.306 [INFO][5059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.306 [INFO][5059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.320 [WARNING][5059] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.321 [INFO][5059] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.327 [INFO][5059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.334093 containerd[1615]: 2026-01-16 23:57:48.331 [INFO][5050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.334093 containerd[1615]: time="2026-01-16T23:57:48.333927251Z" level=info msg="TearDown network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" successfully" Jan 16 23:57:48.334093 containerd[1615]: time="2026-01-16T23:57:48.333972771Z" level=info msg="StopPodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" returns successfully" Jan 16 23:57:48.335452 containerd[1615]: time="2026-01-16T23:57:48.334703883Z" level=info msg="RemovePodSandbox for \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\"" Jan 16 23:57:48.335452 containerd[1615]: time="2026-01-16T23:57:48.334927881Z" level=info msg="Forcibly stopping sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\"" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.387 [WARNING][5073] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"db404f65-1005-4977-b8b1-05db5155d53d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"c3878b8540035f8eb0956e8fccdcca11c48f09e1cea572e2a739dd51f8717454", Pod:"goldmane-666569f655-npqz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7141e2966ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.388 [INFO][5073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.388 [INFO][5073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" iface="eth0" netns="" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.388 [INFO][5073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.388 [INFO][5073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.412 [INFO][5080] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.413 [INFO][5080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.413 [INFO][5080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.423 [WARNING][5080] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.423 [INFO][5080] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" HandleID="k8s-pod-network.19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-goldmane--666569f655--npqz8-eth0" Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.426 [INFO][5080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.433199 containerd[1615]: 2026-01-16 23:57:48.431 [INFO][5073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b" Jan 16 23:57:48.434549 containerd[1615]: time="2026-01-16T23:57:48.433952604Z" level=info msg="TearDown network for sandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" successfully" Jan 16 23:57:48.439666 containerd[1615]: time="2026-01-16T23:57:48.439510666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:48.439666 containerd[1615]: time="2026-01-16T23:57:48.439594065Z" level=info msg="RemovePodSandbox \"19cc241b89dd849b0574ebc2d04f961d4bd8583e7202c758d54ec24098a09a1b\" returns successfully" Jan 16 23:57:48.440281 containerd[1615]: time="2026-01-16T23:57:48.440245338Z" level=info msg="StopPodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\"" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.486 [WARNING][5094] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa73edb7-b960-4dc9-91ae-ac7984d8d56b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41", Pod:"coredns-668d6bf9bc-czgwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9ac081bf5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.487 [INFO][5094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.487 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" iface="eth0" netns="" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.487 [INFO][5094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.487 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.513 [INFO][5101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.513 [INFO][5101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.514 [INFO][5101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.529 [WARNING][5101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.529 [INFO][5101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.533 [INFO][5101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.541865 containerd[1615]: 2026-01-16 23:57:48.536 [INFO][5094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.541865 containerd[1615]: time="2026-01-16T23:57:48.541168721Z" level=info msg="TearDown network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" successfully" Jan 16 23:57:48.541865 containerd[1615]: time="2026-01-16T23:57:48.541205761Z" level=info msg="StopPodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" returns successfully" Jan 16 23:57:48.544121 containerd[1615]: time="2026-01-16T23:57:48.543276819Z" level=info msg="RemovePodSandbox for \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\"" Jan 16 23:57:48.544121 containerd[1615]: time="2026-01-16T23:57:48.543325658Z" level=info msg="Forcibly stopping sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\"" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.601 [WARNING][5116] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa73edb7-b960-4dc9-91ae-ac7984d8d56b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"c925483c1384c2f045275a51f734807130fef4ff5f1ef11a7ec5bd2ed132cd41", Pod:"coredns-668d6bf9bc-czgwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9ac081bf5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.602 [INFO][5116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.602 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" iface="eth0" netns="" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.602 [INFO][5116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.602 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.628 [INFO][5123] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.628 [INFO][5123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.628 [INFO][5123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.640 [WARNING][5123] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.640 [INFO][5123] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" HandleID="k8s-pod-network.68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--czgwf-eth0" Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.644 [INFO][5123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.653225 containerd[1615]: 2026-01-16 23:57:48.648 [INFO][5116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a" Jan 16 23:57:48.653225 containerd[1615]: time="2026-01-16T23:57:48.652715833Z" level=info msg="TearDown network for sandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" successfully" Jan 16 23:57:48.658085 containerd[1615]: time="2026-01-16T23:57:48.657844299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:48.658085 containerd[1615]: time="2026-01-16T23:57:48.657943218Z" level=info msg="RemovePodSandbox \"68fe52e0f5db0f1599e8d3156051f848d55dc1748249d82bf0152d8603c48a9a\" returns successfully" Jan 16 23:57:48.659061 containerd[1615]: time="2026-01-16T23:57:48.659006407Z" level=info msg="StopPodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\"" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.712 [WARNING][5137] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0", GenerateName:"calico-kube-controllers-79df8bc6d5-", Namespace:"calico-system", SelfLink:"", UID:"59191777-c11b-4c90-aa9f-cb135874655c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79df8bc6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af", Pod:"calico-kube-controllers-79df8bc6d5-wt9rj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacac572f0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.712 [INFO][5137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.712 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" iface="eth0" netns="" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.712 [INFO][5137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.712 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.741 [INFO][5145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.741 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.741 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.757 [WARNING][5145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.757 [INFO][5145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.759 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.764157 containerd[1615]: 2026-01-16 23:57:48.762 [INFO][5137] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.765275 containerd[1615]: time="2026-01-16T23:57:48.764239025Z" level=info msg="TearDown network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" successfully" Jan 16 23:57:48.765275 containerd[1615]: time="2026-01-16T23:57:48.764271705Z" level=info msg="StopPodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" returns successfully" Jan 16 23:57:48.765362 containerd[1615]: time="2026-01-16T23:57:48.765287334Z" level=info msg="RemovePodSandbox for \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\"" Jan 16 23:57:48.765362 containerd[1615]: time="2026-01-16T23:57:48.765339413Z" level=info msg="Forcibly stopping sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\"" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.820 [WARNING][5160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0", GenerateName:"calico-kube-controllers-79df8bc6d5-", Namespace:"calico-system", SelfLink:"", UID:"59191777-c11b-4c90-aa9f-cb135874655c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79df8bc6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"9b3fc1e59e2c1bc327cd5c7bf9cf24594c2d8b5ff5e6acb1228138d5faf784af", Pod:"calico-kube-controllers-79df8bc6d5-wt9rj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacac572f0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.820 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.820 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" iface="eth0" netns="" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.820 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.820 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.852 [INFO][5167] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.853 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.853 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.865 [WARNING][5167] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.865 [INFO][5167] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" HandleID="k8s-pod-network.5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--kube--controllers--79df8bc6d5--wt9rj-eth0" Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.868 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.876613 containerd[1615]: 2026-01-16 23:57:48.872 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1" Jan 16 23:57:48.877330 containerd[1615]: time="2026-01-16T23:57:48.876684727Z" level=info msg="TearDown network for sandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" successfully" Jan 16 23:57:48.882219 containerd[1615]: time="2026-01-16T23:57:48.882146830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:48.882378 containerd[1615]: time="2026-01-16T23:57:48.882262189Z" level=info msg="RemovePodSandbox \"5c3bb6d7c27e5790e94f8f9d03e414cd0001ffd0b8524745bcca295dc616eaf1\" returns successfully" Jan 16 23:57:48.883334 containerd[1615]: time="2026-01-16T23:57:48.882982262Z" level=info msg="StopPodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\"" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.935 [WARNING][5181] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"418b98a5-873e-4b20-a6d4-0ef55480b923", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031", Pod:"csi-node-driver-n2dkx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9ed88e9a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.935 [INFO][5181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.935 [INFO][5181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" iface="eth0" netns="" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.935 [INFO][5181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.935 [INFO][5181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.960 [INFO][5188] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.961 [INFO][5188] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.961 [INFO][5188] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.971 [WARNING][5188] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.971 [INFO][5188] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.974 [INFO][5188] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:48.978572 containerd[1615]: 2026-01-16 23:57:48.976 [INFO][5181] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:48.979797 containerd[1615]: time="2026-01-16T23:57:48.979491451Z" level=info msg="TearDown network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" successfully" Jan 16 23:57:48.979797 containerd[1615]: time="2026-01-16T23:57:48.979555210Z" level=info msg="StopPodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" returns successfully" Jan 16 23:57:48.979797 containerd[1615]: time="2026-01-16T23:57:48.980389761Z" level=info msg="RemovePodSandbox for \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\"" Jan 16 23:57:48.979797 containerd[1615]: time="2026-01-16T23:57:48.980430321Z" level=info msg="Forcibly stopping sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\"" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.027 [WARNING][5202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"418b98a5-873e-4b20-a6d4-0ef55480b923", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"51420cc34f9c4cd389a448a0dd84937deb876b4d1275f8f39553dfe4939ec031", Pod:"csi-node-driver-n2dkx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9ed88e9a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.028 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.028 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" iface="eth0" netns="" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.028 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.028 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.053 [INFO][5209] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.053 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.053 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.065 [WARNING][5209] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.066 [INFO][5209] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" HandleID="k8s-pod-network.8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Workload="ci--4081--3--6--n--db2d61d92f-k8s-csi--node--driver--n2dkx-eth0" Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.069 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.075122 containerd[1615]: 2026-01-16 23:57:49.072 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40" Jan 16 23:57:49.075615 containerd[1615]: time="2026-01-16T23:57:49.075195587Z" level=info msg="TearDown network for sandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" successfully" Jan 16 23:57:49.084508 containerd[1615]: time="2026-01-16T23:57:49.084430298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:49.084691 containerd[1615]: time="2026-01-16T23:57:49.084572697Z" level=info msg="RemovePodSandbox \"8bcc19a3b81706a4c02b39edd5dfc7a5bf10b0db2b317ad075008ecc9d3cbb40\" returns successfully" Jan 16 23:57:49.085639 containerd[1615]: time="2026-01-16T23:57:49.085564447Z" level=info msg="StopPodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\"" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.131 [WARNING][5223] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.131 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.131 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" iface="eth0" netns="" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.131 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.131 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.160 [INFO][5230] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.160 [INFO][5230] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.160 [INFO][5230] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.172 [WARNING][5230] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.172 [INFO][5230] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.174 [INFO][5230] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.178743 containerd[1615]: 2026-01-16 23:57:49.176 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.178743 containerd[1615]: time="2026-01-16T23:57:49.178583706Z" level=info msg="TearDown network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" successfully" Jan 16 23:57:49.178743 containerd[1615]: time="2026-01-16T23:57:49.178614306Z" level=info msg="StopPodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" returns successfully" Jan 16 23:57:49.180438 containerd[1615]: time="2026-01-16T23:57:49.180385609Z" level=info msg="RemovePodSandbox for \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\"" Jan 16 23:57:49.180438 containerd[1615]: time="2026-01-16T23:57:49.180433288Z" level=info msg="Forcibly stopping sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\"" Jan 16 23:57:49.255522 containerd[1615]: time="2026-01-16T23:57:49.255430362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.224 [WARNING][5244] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" WorkloadEndpoint="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.224 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.224 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" iface="eth0" netns="" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.224 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.224 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.262 [INFO][5251] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.263 [INFO][5251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.264 [INFO][5251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.276 [WARNING][5251] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.276 [INFO][5251] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" HandleID="k8s-pod-network.aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Workload="ci--4081--3--6--n--db2d61d92f-k8s-whisker--7f48594d95--tmkh9-eth0" Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.280 [INFO][5251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.287107 containerd[1615]: 2026-01-16 23:57:49.284 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2" Jan 16 23:57:49.288320 containerd[1615]: time="2026-01-16T23:57:49.287696729Z" level=info msg="TearDown network for sandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" successfully" Jan 16 23:57:49.293489 containerd[1615]: time="2026-01-16T23:57:49.293330795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:49.293489 containerd[1615]: time="2026-01-16T23:57:49.293441234Z" level=info msg="RemovePodSandbox \"aa192c094faf8b69bcbb6f9d843eb29a2c377500ef98c82221c385697ba71bf2\" returns successfully" Jan 16 23:57:49.294498 containerd[1615]: time="2026-01-16T23:57:49.294445024Z" level=info msg="StopPodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\"" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.347 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c", Pod:"calico-apiserver-6f7cdf6968-8z7jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4437fd1b0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.348 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.348 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" iface="eth0" netns="" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.348 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.348 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.383 [INFO][5273] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.383 [INFO][5273] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.383 [INFO][5273] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.395 [WARNING][5273] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.395 [INFO][5273] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.397 [INFO][5273] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.402123 containerd[1615]: 2026-01-16 23:57:49.399 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.402918 containerd[1615]: time="2026-01-16T23:57:49.402186061Z" level=info msg="TearDown network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" successfully" Jan 16 23:57:49.402918 containerd[1615]: time="2026-01-16T23:57:49.402223420Z" level=info msg="StopPodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" returns successfully" Jan 16 23:57:49.402918 containerd[1615]: time="2026-01-16T23:57:49.402840094Z" level=info msg="RemovePodSandbox for \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\"" Jan 16 23:57:49.402918 containerd[1615]: time="2026-01-16T23:57:49.402894734Z" level=info msg="Forcibly stopping sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\"" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.451 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"98bfe59b-02e4-4bdc-9e5a-0a8209ddd104", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"dc5e0e9fa234e8db8b1d18e5aeb8417e3cf308e52b90115e9a2c7f9c337ba48c", Pod:"calico-apiserver-6f7cdf6968-8z7jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4437fd1b0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.451 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.451 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" iface="eth0" netns="" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.451 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.451 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.477 [INFO][5294] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.478 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.478 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.493 [WARNING][5294] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.493 [INFO][5294] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" HandleID="k8s-pod-network.813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--8z7jz-eth0" Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.495 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.502856 containerd[1615]: 2026-01-16 23:57:49.498 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b" Jan 16 23:57:49.502856 containerd[1615]: time="2026-01-16T23:57:49.501147422Z" level=info msg="TearDown network for sandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" successfully" Jan 16 23:57:49.510726 containerd[1615]: time="2026-01-16T23:57:49.510600571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:49.510950 containerd[1615]: time="2026-01-16T23:57:49.510767209Z" level=info msg="RemovePodSandbox \"813465b72d802fde6d8262e94d1f9553f7d5daa5c3321fc020244dd436f9356b\" returns successfully" Jan 16 23:57:49.511740 containerd[1615]: time="2026-01-16T23:57:49.511656240Z" level=info msg="StopPodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\"" Jan 16 23:57:49.604694 containerd[1615]: time="2026-01-16T23:57:49.604382182Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:49.606231 containerd[1615]: time="2026-01-16T23:57:49.606101446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:57:49.607582 containerd[1615]: time="2026-01-16T23:57:49.606188205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:57:49.607850 kubelet[2729]: E0116 23:57:49.606736 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:49.607850 kubelet[2729]: E0116 23:57:49.606811 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:57:49.607850 kubelet[2729]: E0116 23:57:49.607021 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlwr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:49.608833 kubelet[2729]: E0116 23:57:49.608761 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.573 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"e369e49f-7e57-4c99-8a2f-2c08450c808f", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6", Pod:"calico-apiserver-6f7cdf6968-skgrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d78e29e333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.573 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.573 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" iface="eth0" netns="" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.573 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.573 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.598 [INFO][5315] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.599 [INFO][5315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.599 [INFO][5315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.614 [WARNING][5315] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.615 [INFO][5315] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.619 [INFO][5315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.626725 containerd[1615]: 2026-01-16 23:57:49.623 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.626725 containerd[1615]: time="2026-01-16T23:57:49.626234171Z" level=info msg="TearDown network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" successfully" Jan 16 23:57:49.626725 containerd[1615]: time="2026-01-16T23:57:49.626271290Z" level=info msg="StopPodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" returns successfully" Jan 16 23:57:49.629401 containerd[1615]: time="2026-01-16T23:57:49.627577558Z" level=info msg="RemovePodSandbox for \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\"" Jan 16 23:57:49.631733 containerd[1615]: time="2026-01-16T23:57:49.629546299Z" level=info msg="Forcibly stopping sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\"" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.686 [WARNING][5329] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0", GenerateName:"calico-apiserver-6f7cdf6968-", Namespace:"calico-apiserver", SelfLink:"", UID:"e369e49f-7e57-4c99-8a2f-2c08450c808f", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7cdf6968", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"895426a0cb47cc70b69956707c975dce8f7b8033453ffeb554015bc334bd38d6", Pod:"calico-apiserver-6f7cdf6968-skgrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d78e29e333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.686 [INFO][5329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.686 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" iface="eth0" netns="" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.686 [INFO][5329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.686 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.713 [INFO][5337] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.714 [INFO][5337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.714 [INFO][5337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.727 [WARNING][5337] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.727 [INFO][5337] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" HandleID="k8s-pod-network.b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Workload="ci--4081--3--6--n--db2d61d92f-k8s-calico--apiserver--6f7cdf6968--skgrt-eth0" Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.731 [INFO][5337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.736921 containerd[1615]: 2026-01-16 23:57:49.734 [INFO][5329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8" Jan 16 23:57:49.737405 containerd[1615]: time="2026-01-16T23:57:49.736972578Z" level=info msg="TearDown network for sandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" successfully" Jan 16 23:57:49.741556 containerd[1615]: time="2026-01-16T23:57:49.741489294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:49.742013 containerd[1615]: time="2026-01-16T23:57:49.741599053Z" level=info msg="RemovePodSandbox \"b815939dcb1947f34f4f9e2b2f13ece0d18f4c3a8bb16efca1609be2ecd59af8\" returns successfully" Jan 16 23:57:49.742913 containerd[1615]: time="2026-01-16T23:57:49.742290887Z" level=info msg="StopPodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\"" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.791 [WARNING][5352] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"189086c2-a129-41e8-ba84-b185657d3f10", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550", Pod:"coredns-668d6bf9bc-77fjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9950a033a30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.791 [INFO][5352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.791 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" iface="eth0" netns="" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.791 [INFO][5352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.791 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.821 [INFO][5360] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.822 [INFO][5360] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.822 [INFO][5360] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.835 [WARNING][5360] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.835 [INFO][5360] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.839 [INFO][5360] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.847669 containerd[1615]: 2026-01-16 23:57:49.843 [INFO][5352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.847669 containerd[1615]: time="2026-01-16T23:57:49.847382269Z" level=info msg="TearDown network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" successfully" Jan 16 23:57:49.847669 containerd[1615]: time="2026-01-16T23:57:49.847410949Z" level=info msg="StopPodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" returns successfully" Jan 16 23:57:49.848897 containerd[1615]: time="2026-01-16T23:57:49.848846295Z" level=info msg="RemovePodSandbox for \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\"" Jan 16 23:57:49.848990 containerd[1615]: time="2026-01-16T23:57:49.848906814Z" level=info msg="Forcibly stopping sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\"" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.897 [WARNING][5374] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"189086c2-a129-41e8-ba84-b185657d3f10", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-db2d61d92f", ContainerID:"ef1050efd356e90e7365085408fcd18bb831c42d14d5d4aa84168270173b5550", Pod:"coredns-668d6bf9bc-77fjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9950a033a30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.897 [INFO][5374] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.897 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" iface="eth0" netns="" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.897 [INFO][5374] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.897 [INFO][5374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.929 [INFO][5382] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.929 [INFO][5382] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.929 [INFO][5382] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.942 [WARNING][5382] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.942 [INFO][5382] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" HandleID="k8s-pod-network.2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Workload="ci--4081--3--6--n--db2d61d92f-k8s-coredns--668d6bf9bc--77fjj-eth0" Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.945 [INFO][5382] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 23:57:49.950843 containerd[1615]: 2026-01-16 23:57:49.947 [INFO][5374] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b" Jan 16 23:57:49.950843 containerd[1615]: time="2026-01-16T23:57:49.950770508Z" level=info msg="TearDown network for sandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" successfully" Jan 16 23:57:49.957094 containerd[1615]: time="2026-01-16T23:57:49.956924768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 23:57:49.957094 containerd[1615]: time="2026-01-16T23:57:49.957070087Z" level=info msg="RemovePodSandbox \"2623671246706b3d5ba38c8fbe31e962febe832fb61fbd580c6e09cc51ce5b1b\" returns successfully" Jan 16 23:57:50.250059 containerd[1615]: time="2026-01-16T23:57:50.249523361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:50.606474 containerd[1615]: time="2026-01-16T23:57:50.606347138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:50.608486 containerd[1615]: time="2026-01-16T23:57:50.608246441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:50.608686 containerd[1615]: time="2026-01-16T23:57:50.608364240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:50.609005 kubelet[2729]: E0116 23:57:50.608940 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:50.609616 kubelet[2729]: E0116 23:57:50.609018 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:50.609616 kubelet[2729]: E0116 23:57:50.609182 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j77x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:50.610699 kubelet[2729]: E0116 23:57:50.610639 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:57:51.250458 containerd[1615]: time="2026-01-16T23:57:51.250329856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:57:51.593569 containerd[1615]: time="2026-01-16T23:57:51.593304529Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:51.595176 containerd[1615]: time="2026-01-16T23:57:51.594980035Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:57:51.595176 containerd[1615]: time="2026-01-16T23:57:51.595129234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:57:51.596609 kubelet[2729]: E0116 23:57:51.595660 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:51.596609 kubelet[2729]: E0116 23:57:51.595731 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:57:51.596609 kubelet[2729]: E0116 23:57:51.596039 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:51.602269 
containerd[1615]: time="2026-01-16T23:57:51.598809084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:57:51.929990 containerd[1615]: time="2026-01-16T23:57:51.929800655Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:51.932216 containerd[1615]: time="2026-01-16T23:57:51.931732520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:57:51.932216 containerd[1615]: time="2026-01-16T23:57:51.931918158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:51.934101 kubelet[2729]: E0116 23:57:51.932984 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:51.934101 kubelet[2729]: E0116 23:57:51.933049 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:57:51.934101 kubelet[2729]: E0116 23:57:51.933408 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fqch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:51.935544 kubelet[2729]: E0116 23:57:51.935318 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:57:51.936092 containerd[1615]: time="2026-01-16T23:57:51.935996005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:57:52.272023 containerd[1615]: time="2026-01-16T23:57:52.271728491Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:52.274381 containerd[1615]: time="2026-01-16T23:57:52.274204073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:57:52.274381 containerd[1615]: time="2026-01-16T23:57:52.274267593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:57:52.274648 kubelet[2729]: E0116 23:57:52.274544 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:52.274733 kubelet[2729]: E0116 23:57:52.274612 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:57:52.275520 kubelet[2729]: E0116 23:57:52.274903 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:52.276285 kubelet[2729]: E0116 23:57:52.276230 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" 
podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:57:54.252612 containerd[1615]: time="2026-01-16T23:57:54.252554857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:57:54.604917 containerd[1615]: time="2026-01-16T23:57:54.604828027Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:57:54.606778 containerd[1615]: time="2026-01-16T23:57:54.606710816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:57:54.606962 containerd[1615]: time="2026-01-16T23:57:54.606846535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:57:54.607177 kubelet[2729]: E0116 23:57:54.607091 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:54.607177 kubelet[2729]: E0116 23:57:54.607162 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:57:54.607861 kubelet[2729]: E0116 23:57:54.607317 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxmf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:57:54.608670 kubelet[2729]: E0116 23:57:54.608576 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:58:00.251209 kubelet[2729]: E0116 23:58:00.250147 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:58:03.249174 kubelet[2729]: E0116 23:58:03.248999 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:58:03.253748 kubelet[2729]: E0116 23:58:03.252345 2729 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:58:04.249344 kubelet[2729]: E0116 23:58:04.248941 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:58:07.250029 kubelet[2729]: E0116 23:58:07.249478 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:58:07.250029 kubelet[2729]: E0116 23:58:07.249538 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:58:14.251559 containerd[1615]: time="2026-01-16T23:58:14.251487302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:58:14.607122 containerd[1615]: time="2026-01-16T23:58:14.607033853Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:14.608722 containerd[1615]: time="2026-01-16T23:58:14.608652659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:58:14.608916 containerd[1615]: time="2026-01-16T23:58:14.608843860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:58:14.609937 kubelet[2729]: E0116 23:58:14.609875 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:14.609937 kubelet[2729]: E0116 23:58:14.609938 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:58:14.610383 kubelet[2729]: E0116 23:58:14.610068 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a43c7915b4074bd1b752561b3055b41c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:14.614097 containerd[1615]: time="2026-01-16T23:58:14.614031959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:58:14.953048 containerd[1615]: time="2026-01-16T23:58:14.952865808Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:14.955217 containerd[1615]: time="2026-01-16T23:58:14.955076337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:58:14.955217 containerd[1615]: time="2026-01-16T23:58:14.955164337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:58:14.955562 kubelet[2729]: E0116 23:58:14.955400 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:58:14.955686 kubelet[2729]: E0116 23:58:14.955583 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:58:14.956224 kubelet[2729]: E0116 23:58:14.955867 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:14.957602 kubelet[2729]: E0116 23:58:14.957534 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:58:16.255881 containerd[1615]: time="2026-01-16T23:58:16.254965247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:58:16.587767 containerd[1615]: time="2026-01-16T23:58:16.587556460Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:16.589129 containerd[1615]: time="2026-01-16T23:58:16.588988426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:58:16.589129 containerd[1615]: time="2026-01-16T23:58:16.589034506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:58:16.590155 kubelet[2729]: E0116 23:58:16.589616 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:58:16.590155 kubelet[2729]: E0116 23:58:16.589713 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:58:16.590155 kubelet[2729]: E0116 23:58:16.590014 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlwr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:16.591338 kubelet[2729]: E0116 23:58:16.591267 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:58:17.255016 containerd[1615]: time="2026-01-16T23:58:17.254954096Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:58:17.589603 containerd[1615]: time="2026-01-16T23:58:17.589510665Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:17.591287 containerd[1615]: time="2026-01-16T23:58:17.591155073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:58:17.591287 containerd[1615]: time="2026-01-16T23:58:17.591239593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:58:17.591576 kubelet[2729]: E0116 23:58:17.591513 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:58:17.591983 kubelet[2729]: E0116 23:58:17.591590 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:58:17.591983 kubelet[2729]: E0116 23:58:17.591810 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:17.596751 containerd[1615]: time="2026-01-16T23:58:17.596689739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:58:17.937603 containerd[1615]: time="2026-01-16T23:58:17.936833454Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:17.939923 containerd[1615]: time="2026-01-16T23:58:17.939762108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:58:17.940119 containerd[1615]: time="2026-01-16T23:58:17.940026309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:58:17.941670 kubelet[2729]: E0116 23:58:17.940283 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:58:17.941670 kubelet[2729]: E0116 23:58:17.940351 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:58:17.941670 kubelet[2729]: E0116 23:58:17.940471 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:17.942066 kubelet[2729]: E0116 23:58:17.942007 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:58:18.256345 containerd[1615]: time="2026-01-16T23:58:18.256173912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:18.601310 containerd[1615]: time="2026-01-16T23:58:18.601230319Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:18.603127 containerd[1615]: time="2026-01-16T23:58:18.602999328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:18.603127 containerd[1615]: time="2026-01-16T23:58:18.603081128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:18.604404 kubelet[2729]: E0116 23:58:18.604285 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:18.604404 kubelet[2729]: E0116 23:58:18.604356 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:18.605256 kubelet[2729]: E0116 23:58:18.604482 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j77x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:18.605977 kubelet[2729]: E0116 23:58:18.605876 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:58:19.251406 containerd[1615]: time="2026-01-16T23:58:19.251306848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:58:19.609695 containerd[1615]: time="2026-01-16T23:58:19.608581224Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:19.611808 containerd[1615]: time="2026-01-16T23:58:19.611579160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:58:19.611808 containerd[1615]: time="2026-01-16T23:58:19.611669760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:19.612338 kubelet[2729]: E0116 23:58:19.612135 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:58:19.612338 kubelet[2729]: E0116 23:58:19.612208 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:58:19.613949 kubelet[2729]: E0116 23:58:19.613878 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fqch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:19.615367 kubelet[2729]: E0116 23:58:19.615323 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:58:21.251740 containerd[1615]: 
time="2026-01-16T23:58:21.250699059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:21.609187 containerd[1615]: time="2026-01-16T23:58:21.608920847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:21.612032 containerd[1615]: time="2026-01-16T23:58:21.611924545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:21.612032 containerd[1615]: time="2026-01-16T23:58:21.611989665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:21.613078 kubelet[2729]: E0116 23:58:21.612429 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:21.613078 kubelet[2729]: E0116 23:58:21.612498 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:21.613078 kubelet[2729]: E0116 23:58:21.612711 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxmf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:21.614249 kubelet[2729]: E0116 23:58:21.613845 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:58:29.251574 kubelet[2729]: E0116 23:58:29.251091 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:58:30.253681 kubelet[2729]: E0116 23:58:30.252758 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 
23:58:30.253681 kubelet[2729]: E0116 23:58:30.252877 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:58:31.249171 kubelet[2729]: E0116 23:58:31.249102 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:58:31.556132 systemd[1]: run-containerd-runc-k8s.io-707af708cb1ea9d712dc9e4c665075513beac9690bd8e2ceb6686a3c51f0d2db-runc.kh09AY.mount: Deactivated successfully. 
Jan 16 23:58:35.250655 kubelet[2729]: E0116 23:58:35.248556 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:58:36.250989 kubelet[2729]: E0116 23:58:36.250872 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:58:41.248011 kubelet[2729]: E0116 23:58:41.247908 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:58:43.251910 kubelet[2729]: E0116 23:58:43.251847 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:58:43.252394 kubelet[2729]: E0116 23:58:43.251975 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:58:46.254004 kubelet[2729]: E0116 23:58:46.251081 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:58:49.248632 kubelet[2729]: E0116 23:58:49.247843 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:58:49.248632 kubelet[2729]: E0116 23:58:49.248281 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:58:54.252576 kubelet[2729]: E0116 23:58:54.250979 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:58:54.255684 kubelet[2729]: E0116 23:58:54.255585 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:58:55.251883 kubelet[2729]: E0116 23:58:55.251818 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:58:59.250726 containerd[1615]: time="2026-01-16T23:58:59.248722452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:58:59.598342 containerd[1615]: time="2026-01-16T23:58:59.598261314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:58:59.599899 containerd[1615]: time="2026-01-16T23:58:59.599822493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:58:59.600516 containerd[1615]: time="2026-01-16T23:58:59.599991775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:58:59.600619 kubelet[2729]: E0116 23:58:59.600196 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:59.600619 kubelet[2729]: E0116 23:58:59.600258 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:58:59.600619 kubelet[2729]: E0116 23:58:59.600412 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j77x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:58:59.602428 kubelet[2729]: E0116 23:58:59.602006 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:59:02.255937 containerd[1615]: time="2026-01-16T23:59:02.255891563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 23:59:02.608480 containerd[1615]: time="2026-01-16T23:59:02.608395746Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:02.610126 containerd[1615]: time="2026-01-16T23:59:02.610005125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 23:59:02.610126 containerd[1615]: time="2026-01-16T23:59:02.610085446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:02.611353 kubelet[2729]: E0116 23:59:02.610336 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:02.611353 kubelet[2729]: E0116 23:59:02.610411 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 23:59:02.611353 kubelet[2729]: E0116 23:59:02.610607 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxmf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:02.612227 kubelet[2729]: E0116 23:59:02.612135 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:59:03.250690 containerd[1615]: time="2026-01-16T23:59:03.248698853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 23:59:03.582024 containerd[1615]: time="2026-01-16T23:59:03.581819861Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:03.583751 containerd[1615]: time="2026-01-16T23:59:03.583532363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 23:59:03.583751 containerd[1615]: time="2026-01-16T23:59:03.583700885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 16 23:59:03.583956 kubelet[2729]: E0116 23:59:03.583868 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:59:03.583956 kubelet[2729]: E0116 23:59:03.583918 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 23:59:03.584150 kubelet[2729]: E0116 23:59:03.584040 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fqch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:03.585326 kubelet[2729]: E0116 23:59:03.585285 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:59:08.256079 containerd[1615]: 
time="2026-01-16T23:59:08.255813707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 23:59:08.609563 containerd[1615]: time="2026-01-16T23:59:08.609365850Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:08.611530 containerd[1615]: time="2026-01-16T23:59:08.611378996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 23:59:08.611530 containerd[1615]: time="2026-01-16T23:59:08.611491357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 16 23:59:08.611726 kubelet[2729]: E0116 23:59:08.611635 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:59:08.611726 kubelet[2729]: E0116 23:59:08.611687 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 23:59:08.612107 kubelet[2729]: E0116 23:59:08.611896 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlwr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:08.613110 containerd[1615]: time="2026-01-16T23:59:08.612700493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:59:08.613974 kubelet[2729]: E0116 23:59:08.613929 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:59:08.964435 containerd[1615]: time="2026-01-16T23:59:08.963413920Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:08.965274 containerd[1615]: time="2026-01-16T23:59:08.965200782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:59:08.965380 containerd[1615]: time="2026-01-16T23:59:08.965329464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:59:08.966693 kubelet[2729]: E0116 23:59:08.965505 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:08.966693 kubelet[2729]: E0116 23:59:08.965548 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:08.966693 kubelet[2729]: E0116 23:59:08.965769 2729 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:08.966920 containerd[1615]: time="2026-01-16T23:59:08.966191235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 23:59:09.309829 containerd[1615]: time="2026-01-16T23:59:09.309451346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:09.313521 containerd[1615]: time="2026-01-16T23:59:09.312081660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 23:59:09.313521 containerd[1615]: time="2026-01-16T23:59:09.312225222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 16 23:59:09.313765 kubelet[2729]: E0116 23:59:09.312574 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:59:09.313765 kubelet[2729]: E0116 23:59:09.312707 2729 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 23:59:09.313765 kubelet[2729]: E0116 23:59:09.313254 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a43c7915b4074bd1b752561b3055b41c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:09.314152 containerd[1615]: time="2026-01-16T23:59:09.314103046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:59:09.662936 containerd[1615]: time="2026-01-16T23:59:09.662757108Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:09.664747 containerd[1615]: time="2026-01-16T23:59:09.664009604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 23:59:09.664747 containerd[1615]: time="2026-01-16T23:59:09.664082565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 16 23:59:09.664913 kubelet[2729]: E0116 23:59:09.664217 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:59:09.664913 kubelet[2729]: E0116 23:59:09.664464 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 23:59:09.664913 kubelet[2729]: E0116 23:59:09.664711 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:09.667785 kubelet[2729]: E0116 23:59:09.666019 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:59:09.667934 containerd[1615]: time="2026-01-16T23:59:09.666290393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 23:59:10.000795 containerd[1615]: time="2026-01-16T23:59:10.000619832Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:10.002498 containerd[1615]: time="2026-01-16T23:59:10.002439295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 23:59:10.002498 containerd[1615]: time="2026-01-16T23:59:10.002472896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 16 23:59:10.003824 kubelet[2729]: E0116 23:59:10.003764 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:59:10.003824 kubelet[2729]: E0116 23:59:10.003821 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 23:59:10.004326 kubelet[2729]: E0116 23:59:10.003932 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cq6n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c6b945f-kpw28_calico-system(7b693a10-74eb-43ee-978b-c4010636e57f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:10.005526 kubelet[2729]: E0116 23:59:10.005480 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:59:12.251955 kubelet[2729]: E0116 23:59:12.251901 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:59:16.251405 kubelet[2729]: E0116 23:59:16.251351 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:59:18.248827 kubelet[2729]: E0116 23:59:18.248474 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:59:19.039269 systemd[1]: Started sshd@7-188.245.124.206:22-4.153.228.146:38478.service - OpenSSH per-connection server daemon (4.153.228.146:38478). Jan 16 23:59:19.673887 sshd[5516]: Accepted publickey for core from 4.153.228.146 port 38478 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:19.675852 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:19.681701 systemd-logind[1590]: New session 8 of user core. Jan 16 23:59:19.688299 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 23:59:20.248862 sshd[5516]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:20.253396 kubelet[2729]: E0116 23:59:20.251118 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:59:20.261735 systemd[1]: sshd@7-188.245.124.206:22-4.153.228.146:38478.service: Deactivated successfully. Jan 16 23:59:20.267032 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 23:59:20.271109 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Jan 16 23:59:20.274807 systemd-logind[1590]: Removed session 8. 
Jan 16 23:59:22.281876 kubelet[2729]: E0116 23:59:22.258365 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:59:23.251533 kubelet[2729]: E0116 23:59:23.251281 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:59:23.253664 kubelet[2729]: E0116 23:59:23.252157 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:59:25.349934 systemd[1]: Started sshd@8-188.245.124.206:22-4.153.228.146:40624.service - OpenSSH per-connection server daemon (4.153.228.146:40624). Jan 16 23:59:25.980569 sshd[5533]: Accepted publickey for core from 4.153.228.146 port 40624 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:25.983715 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:25.993753 systemd-logind[1590]: New session 9 of user core. Jan 16 23:59:25.998059 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 23:59:26.534753 sshd[5533]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:26.544696 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. 
Jan 16 23:59:26.545438 systemd[1]: sshd@8-188.245.124.206:22-4.153.228.146:40624.service: Deactivated successfully. Jan 16 23:59:26.554671 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 23:59:26.558550 systemd-logind[1590]: Removed session 9. Jan 16 23:59:31.255227 kubelet[2729]: E0116 23:59:31.252602 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:59:31.637995 systemd[1]: Started sshd@9-188.245.124.206:22-4.153.228.146:40628.service - OpenSSH per-connection server daemon (4.153.228.146:40628). Jan 16 23:59:32.241958 sshd[5570]: Accepted publickey for core from 4.153.228.146 port 40628 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:32.249274 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:32.260908 systemd-logind[1590]: New session 10 of user core. Jan 16 23:59:32.266946 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 23:59:32.743014 sshd[5570]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:32.751100 systemd[1]: sshd@9-188.245.124.206:22-4.153.228.146:40628.service: Deactivated successfully. Jan 16 23:59:32.757043 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit. Jan 16 23:59:32.757046 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 23:59:32.759252 systemd-logind[1590]: Removed session 10. Jan 16 23:59:32.848433 systemd[1]: Started sshd@10-188.245.124.206:22-4.153.228.146:40644.service - OpenSSH per-connection server daemon (4.153.228.146:40644). Jan 16 23:59:33.250438 kubelet[2729]: E0116 23:59:33.250245 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:59:33.455783 sshd[5585]: Accepted publickey for core from 4.153.228.146 port 40644 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:33.458581 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:33.468620 systemd-logind[1590]: New session 11 of user core. Jan 16 23:59:33.473683 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 23:59:34.026844 sshd[5585]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:34.031930 systemd[1]: sshd@10-188.245.124.206:22-4.153.228.146:40644.service: Deactivated successfully. Jan 16 23:59:34.041832 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit. Jan 16 23:59:34.041974 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 16 23:59:34.047238 systemd-logind[1590]: Removed session 11. Jan 16 23:59:34.134480 systemd[1]: Started sshd@11-188.245.124.206:22-4.153.228.146:40650.service - OpenSSH per-connection server daemon (4.153.228.146:40650). Jan 16 23:59:34.249234 kubelet[2729]: E0116 23:59:34.249148 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:59:34.794812 sshd[5597]: Accepted publickey for core from 4.153.228.146 port 40650 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:34.796943 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:34.803295 systemd-logind[1590]: New session 12 of user core. Jan 16 23:59:34.807221 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 23:59:35.248949 kubelet[2729]: E0116 23:59:35.248881 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:59:35.424400 sshd[5597]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:35.435896 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit. Jan 16 23:59:35.436070 systemd[1]: sshd@11-188.245.124.206:22-4.153.228.146:40650.service: Deactivated successfully. Jan 16 23:59:35.442095 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 23:59:35.446694 systemd-logind[1590]: Removed session 12. 
Jan 16 23:59:36.253285 kubelet[2729]: E0116 23:59:36.252861 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:59:37.250152 kubelet[2729]: E0116 23:59:37.250092 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:59:40.534170 systemd[1]: Started sshd@12-188.245.124.206:22-4.153.228.146:50920.service - OpenSSH per-connection server daemon (4.153.228.146:50920). Jan 16 23:59:41.167969 sshd[5616]: Accepted publickey for core from 4.153.228.146 port 50920 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:41.171501 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:41.182302 systemd-logind[1590]: New session 13 of user core. Jan 16 23:59:41.187982 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 23:59:41.701221 sshd[5616]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:41.710800 systemd[1]: sshd@12-188.245.124.206:22-4.153.228.146:50920.service: Deactivated successfully. Jan 16 23:59:41.723235 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 23:59:41.725915 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit. Jan 16 23:59:41.728605 systemd-logind[1590]: Removed session 13. Jan 16 23:59:41.802498 systemd[1]: Started sshd@13-188.245.124.206:22-4.153.228.146:50922.service - OpenSSH per-connection server daemon (4.153.228.146:50922). 
Jan 16 23:59:42.250852 kubelet[2729]: E0116 23:59:42.250802 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:59:42.429561 sshd[5630]: Accepted publickey for core from 4.153.228.146 port 50922 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:42.431358 sshd[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:42.440169 systemd-logind[1590]: New session 14 of user core. Jan 16 23:59:42.446089 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 23:59:43.123990 sshd[5630]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:43.133006 systemd[1]: sshd@13-188.245.124.206:22-4.153.228.146:50922.service: Deactivated successfully. Jan 16 23:59:43.136768 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 23:59:43.138013 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. Jan 16 23:59:43.139801 systemd-logind[1590]: Removed session 14. Jan 16 23:59:43.232281 systemd[1]: Started sshd@14-188.245.124.206:22-4.153.228.146:50932.service - OpenSSH per-connection server daemon (4.153.228.146:50932). Jan 16 23:59:43.844594 sshd[5642]: Accepted publickey for core from 4.153.228.146 port 50932 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:43.847966 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:43.859356 systemd-logind[1590]: New session 15 of user core. Jan 16 23:59:43.865277 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 23:59:45.066994 sshd[5642]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:45.078042 systemd[1]: sshd@14-188.245.124.206:22-4.153.228.146:50932.service: Deactivated successfully. Jan 16 23:59:45.082230 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. Jan 16 23:59:45.082817 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 23:59:45.086121 systemd-logind[1590]: Removed session 15. Jan 16 23:59:45.177952 systemd[1]: Started sshd@15-188.245.124.206:22-4.153.228.146:53868.service - OpenSSH per-connection server daemon (4.153.228.146:53868). Jan 16 23:59:45.806126 sshd[5663]: Accepted publickey for core from 4.153.228.146 port 53868 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:45.810824 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:45.825967 systemd-logind[1590]: New session 16 of user core. Jan 16 23:59:45.834781 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 16 23:59:46.259726 kubelet[2729]: E0116 23:59:46.257439 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 16 23:59:46.557078 sshd[5663]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:46.565917 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit. Jan 16 23:59:46.567065 systemd[1]: sshd@15-188.245.124.206:22-4.153.228.146:53868.service: Deactivated successfully. Jan 16 23:59:46.573146 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 23:59:46.574499 systemd-logind[1590]: Removed session 16. Jan 16 23:59:46.670942 systemd[1]: Started sshd@16-188.245.124.206:22-4.153.228.146:53870.service - OpenSSH per-connection server daemon (4.153.228.146:53870). Jan 16 23:59:47.323292 sshd[5675]: Accepted publickey for core from 4.153.228.146 port 53870 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:47.329995 sshd[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:47.342967 systemd-logind[1590]: New session 17 of user core. Jan 16 23:59:47.349840 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 23:59:47.860188 sshd[5675]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:47.867363 systemd[1]: sshd@16-188.245.124.206:22-4.153.228.146:53870.service: Deactivated successfully. Jan 16 23:59:47.875997 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 23:59:47.876179 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Jan 16 23:59:47.879819 systemd-logind[1590]: Removed session 17. 
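Sessions 12 through 17 in this stretch all open and close within seconds, every one from 4.153.228.146 with the same key, and each gets its own session-N.scope from systemd-logind: the signature of an automated client running one command per SSH connection rather than an interactive user. A toy pairing of the New/Removed lines, assuming plain journalctl text as input:

```python
import re

# Pair "New session N" with "Removed session N" from systemd-logind entries.
EVENT_RE = re.compile(r"(New|Removed) session (\d+)")

def unpaired_sessions(journal_text: str) -> set[str]:
    open_ids: set[str] = set()
    for kind, sid in EVENT_RE.findall(journal_text):
        if kind == "New":
            open_ids.add(sid)
        else:
            open_ids.discard(sid)
    return open_ids  # sessions opened but never cleanly removed
```

On the lines above the result is empty: every session is torn down cleanly, which supports the automated-client reading rather than dropped connections.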
Jan 16 23:59:48.260879 kubelet[2729]: E0116 23:59:48.260739 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:59:48.262727 kubelet[2729]: E0116 23:59:48.261467 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 16 23:59:49.248257 kubelet[2729]: E0116 23:59:49.248172 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 16 23:59:49.249987 kubelet[2729]: E0116 23:59:49.248989 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 16 23:59:52.970186 systemd[1]: Started sshd@17-188.245.124.206:22-4.153.228.146:53872.service - OpenSSH 
per-connection server daemon (4.153.228.146:53872). Jan 16 23:59:53.248271 kubelet[2729]: E0116 23:59:53.246928 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 16 23:59:53.596284 sshd[5693]: Accepted publickey for core from 4.153.228.146 port 53872 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:53.601822 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:53.608507 systemd-logind[1590]: New session 18 of user core. Jan 16 23:59:53.615178 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 23:59:54.156852 sshd[5693]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:54.168582 systemd[1]: sshd@17-188.245.124.206:22-4.153.228.146:53872.service: Deactivated successfully. Jan 16 23:59:54.174002 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Jan 16 23:59:54.174538 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 23:59:54.176914 systemd-logind[1590]: Removed session 18. Jan 16 23:59:59.251109 kubelet[2729]: E0116 23:59:59.251043 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 16 23:59:59.276041 systemd[1]: Started sshd@18-188.245.124.206:22-4.153.228.146:35268.service - OpenSSH per-connection server daemon (4.153.228.146:35268). Jan 16 23:59:59.914513 sshd[5708]: Accepted publickey for core from 4.153.228.146 port 35268 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:59.918352 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:59.932653 systemd-logind[1590]: New session 19 of user core. Jan 16 23:59:59.939865 systemd[1]: Started session-19.scope - Session 19 of User core. 
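The spacing of these repeats is not random: kubelet retries failed image pulls through an exponential back-off, which is why the same "Back-off pulling image" entries appear seconds apart early in the journal and minutes apart by this point. A sketch of the schedule, assuming kubelet's default image back-off parameters (10s initial delay, doubled per failure, capped at 300s); the real constants live in kubelet, not in this journal:

```python
# Kubelet-style image pull back-off: each consecutive failure doubles the
# delay before the next attempt, up to a cap. The constants are assumed
# defaults (10s initial, 300s cap), not values read from this system.
def backoff_schedule(failures: int, initial: float = 10.0, cap: float = 300.0):
    delay = initial
    for _ in range(failures):
        yield delay
        delay = min(delay * 2, cap)

print(list(backoff_schedule(7)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```

Once the cap is reached, the roughly five-minute cadence visible between the later calico entries is exactly what this schedule predicts.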
Jan 17 00:00:00.250088 kubelet[2729]: E0117 00:00:00.249548 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 17 00:00:00.537337 sshd[5708]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:00.544007 systemd[1]: sshd@18-188.245.124.206:22-4.153.228.146:35268.service: Deactivated successfully. Jan 17 00:00:00.550616 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:00:00.558014 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jan 17 00:00:00.558501 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:00:00.564957 systemd-logind[1590]: Removed session 19. Jan 17 00:00:00.573967 systemd[1]: logrotate.service: Deactivated successfully. Jan 17 00:00:02.249571 kubelet[2729]: E0117 00:00:02.249505 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 17 00:00:03.255560 kubelet[2729]: E0117 00:00:03.253234 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 17 00:00:03.255560 kubelet[2729]: E0117 00:00:03.253922 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 17 00:00:05.251593 kubelet[2729]: E0117 00:00:05.251542 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 17 00:00:05.643716 systemd[1]: Started sshd@19-188.245.124.206:22-4.153.228.146:49082.service - OpenSSH per-connection server daemon (4.153.228.146:49082). Jan 17 00:00:06.275349 sshd[5748]: Accepted publickey for core from 4.153.228.146 port 49082 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:00:06.279333 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:06.290126 systemd-logind[1590]: New session 20 of user core. Jan 17 00:00:06.295981 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:00:06.907245 sshd[5748]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:06.918433 systemd[1]: sshd@19-188.245.124.206:22-4.153.228.146:49082.service: Deactivated successfully. Jan 17 00:00:06.924508 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:00:06.925176 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:00:06.927395 systemd-logind[1590]: Removed session 20. 
Jan 17 00:00:11.248390 kubelet[2729]: E0117 00:00:11.248329 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 17 00:00:12.249617 kubelet[2729]: E0117 00:00:12.249529 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 17 00:00:15.249661 kubelet[2729]: E0117 00:00:15.248438 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 17 00:00:15.251071 kubelet[2729]: E0117 00:00:15.251028 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 17 00:00:17.248006 kubelet[2729]: E0117 00:00:17.247518 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 17 00:00:18.253432 kubelet[2729]: E0117 00:00:18.253391 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 17 00:00:22.252012 kubelet[2729]: E0117 00:00:22.251932 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 17 00:00:23.249155 kubelet[2729]: E0117 00:00:23.249096 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 17 00:00:26.248606 kubelet[2729]: E0117 00:00:26.247995 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6b945f-kpw28" podUID="7b693a10-74eb-43ee-978b-c4010636e57f" Jan 17 00:00:29.249059 containerd[1615]: time="2026-01-17T00:00:29.248952986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:00:33.121589 containerd[1615]: time="2026-01-17T00:00:33.121497040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:33.123576 containerd[1615]: time="2026-01-17T00:00:33.123408857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:00:33.123698 containerd[1615]: time="2026-01-17T00:00:33.123545415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:00:33.124405 kubelet[2729]: E0117 00:00:33.123890 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:00:33.124405 kubelet[2729]: E0117 00:00:33.123957 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:00:33.124908 containerd[1615]: time="2026-01-17T00:00:33.124306206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:00:33.125253 kubelet[2729]: E0117 00:00:33.124895 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j77x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-skgrt_calico-apiserver(e369e49f-7e57-4c99-8a2f-2c08450c808f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:33.126741 kubelet[2729]: E0117 00:00:33.126281 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-skgrt" podUID="e369e49f-7e57-4c99-8a2f-2c08450c808f" Jan 17 00:00:34.657661 containerd[1615]: time="2026-01-17T00:00:34.657572932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:34.660154 containerd[1615]: time="2026-01-17T00:00:34.660058623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:00:34.660154 containerd[1615]: time="2026-01-17T00:00:34.660122382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:00:34.660355 kubelet[2729]: E0117 00:00:34.660308 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:00:34.661258 kubelet[2729]: E0117 00:00:34.660371 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:00:34.661258 kubelet[2729]: E0117 00:00:34.660683 2729 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlwr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79df8bc6d5-wt9rj_calico-system(59191777-c11b-4c90-aa9f-cb135874655c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:34.661401 containerd[1615]: time="2026-01-17T00:00:34.660737295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:00:34.663008 kubelet[2729]: E0117 00:00:34.662847 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-79df8bc6d5-wt9rj" podUID="59191777-c11b-4c90-aa9f-cb135874655c" Jan 17 00:00:37.284820 containerd[1615]: time="2026-01-17T00:00:37.284669775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:37.286842 containerd[1615]: time="2026-01-17T00:00:37.286782472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:00:37.287738 containerd[1615]: time="2026-01-17T00:00:37.286825671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:00:37.287738 containerd[1615]: time="2026-01-17T00:00:37.287410265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:00:37.287840 kubelet[2729]: E0117 00:00:37.287025 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:00:37.287840 kubelet[2729]: E0117 00:00:37.287085 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:00:37.287840 kubelet[2729]: E0117 00:00:37.287299 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxmf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7cdf6968-8z7jz_calico-apiserver(98bfe59b-02e4-4bdc-9e5a-0a8209ddd104): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:37.288578 kubelet[2729]: E0117 00:00:37.288446 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7cdf6968-8z7jz" podUID="98bfe59b-02e4-4bdc-9e5a-0a8209ddd104" Jan 17 00:00:37.611206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf-rootfs.mount: Deactivated successfully. Jan 17 00:00:37.617501 containerd[1615]: time="2026-01-17T00:00:37.617418799Z" level=info msg="shim disconnected" id=f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf namespace=k8s.io Jan 17 00:00:37.617501 containerd[1615]: time="2026-01-17T00:00:37.617497718Z" level=warning msg="cleaning up after shim disconnected" id=f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf namespace=k8s.io Jan 17 00:00:37.617501 containerd[1615]: time="2026-01-17T00:00:37.617509918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:00:37.630146 containerd[1615]: time="2026-01-17T00:00:37.630081781Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:00:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:00:38.061413 kubelet[2729]: E0117 00:00:38.060559 2729 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44224->10.0.0.2:2379: read: connection timed out" Jan 17 00:00:38.104930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca-rootfs.mount: Deactivated successfully. 
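Alongside the pull failures, this block introduces a separate problem: the kubelet cannot renew its node Lease because a read from etcd (10.0.0.2:2379) timed out, the same root cause behind the rejected event further down. The Lease in kube-node-lease is the node heartbeat; if renewTime goes stale for long enough, the node controller marks the node NotReady. A quick way to inspect it, sketched with the official kubernetes Python client, taking the node name from the rejected-event entry below (ci-4081-3-6-n-db2d61d92f) and assuming local kubeconfig credentials:

```python
from kubernetes import client, config

# Inspect the node heartbeat Lease; a renewTime far in the past means the
# kubelet's updates (like the failed one above) are not landing.
config.load_kube_config()
lease = client.CoordinationV1Api().read_namespaced_lease(
    name="ci-4081-3-6-n-db2d61d92f", namespace="kube-node-lease"
)
print(lease.spec.holder_identity, lease.spec.renew_time)
```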
Jan 17 00:00:38.110547 containerd[1615]: time="2026-01-17T00:00:38.110337276Z" level=info msg="shim disconnected" id=9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca namespace=k8s.io Jan 17 00:00:38.110547 containerd[1615]: time="2026-01-17T00:00:38.110539914Z" level=warning msg="cleaning up after shim disconnected" id=9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca namespace=k8s.io Jan 17 00:00:38.110547 containerd[1615]: time="2026-01-17T00:00:38.110554554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:00:38.484103 kubelet[2729]: I0117 00:00:38.481727 2729 scope.go:117] "RemoveContainer" containerID="f31776948309214171641927f107e2ff5472c32a100dec1711148a9660ccd1bf" Jan 17 00:00:38.486251 containerd[1615]: time="2026-01-17T00:00:38.486029111Z" level=info msg="CreateContainer within sandbox \"a790f11ca5a6ee36bedcd71d949f32e6e4cd510ec886dc1fb78a10df9202d1d6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 00:00:38.487603 kubelet[2729]: I0117 00:00:38.487487 2729 scope.go:117] "RemoveContainer" containerID="9c0c38ebc3f9687b6e23777c2b81a8dca87600886b244f52bc20d9947d6704ca" Jan 17 00:00:38.491562 containerd[1615]: time="2026-01-17T00:00:38.491037058Z" level=info msg="CreateContainer within sandbox \"2572583c81cc20d03990e115aeb6216c10dc5bcd275ecdf63566c015a585d322\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:00:38.519564 containerd[1615]: time="2026-01-17T00:00:38.519431798Z" level=info msg="CreateContainer within sandbox \"2572583c81cc20d03990e115aeb6216c10dc5bcd275ecdf63566c015a585d322\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"412de6c6d0c9d4ef9a9686ece59e26c902bc64dc9116e2826085d60e79a948eb\"" Jan 17 00:00:38.520368 containerd[1615]: time="2026-01-17T00:00:38.520330309Z" level=info msg="CreateContainer within sandbox \"a790f11ca5a6ee36bedcd71d949f32e6e4cd510ec886dc1fb78a10df9202d1d6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"483e051670435acd55f20258b823acdb4c4b3b1b7a4f6da0b67eb46a033bb61f\"" Jan 17 00:00:38.521018 containerd[1615]: time="2026-01-17T00:00:38.520801584Z" level=info msg="StartContainer for \"412de6c6d0c9d4ef9a9686ece59e26c902bc64dc9116e2826085d60e79a948eb\"" Jan 17 00:00:38.521370 containerd[1615]: time="2026-01-17T00:00:38.521197739Z" level=info msg="StartContainer for \"483e051670435acd55f20258b823acdb4c4b3b1b7a4f6da0b67eb46a033bb61f\"" Jan 17 00:00:38.605348 containerd[1615]: time="2026-01-17T00:00:38.605231012Z" level=info msg="StartContainer for \"483e051670435acd55f20258b823acdb4c4b3b1b7a4f6da0b67eb46a033bb61f\" returns successfully" Jan 17 00:00:38.608107 containerd[1615]: time="2026-01-17T00:00:38.608076102Z" level=info msg="StartContainer for \"412de6c6d0c9d4ef9a9686ece59e26c902bc64dc9116e2826085d60e79a948eb\" returns successfully" Jan 17 00:00:39.090091 containerd[1615]: time="2026-01-17T00:00:39.089862084Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:39.092312 containerd[1615]: time="2026-01-17T00:00:39.092095021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:00:39.092312 containerd[1615]: time="2026-01-17T00:00:39.092264540Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:00:39.094000 kubelet[2729]: E0117 00:00:39.092744 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:00:39.094000 kubelet[2729]: E0117 00:00:39.092898 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:00:39.094000 kubelet[2729]: E0117 00:00:39.093178 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:39.094239 containerd[1615]: time="2026-01-17T00:00:39.093636566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:00:39.718796 kubelet[2729]: E0117 00:00:39.718335 2729 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44032->10.0.0.2:2379: read: connection timed out" 
event="&Event{ObjectMeta:{calico-apiserver-6f7cdf6968-skgrt.188b5b79f4096cd7 calico-apiserver 1364 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-6f7cdf6968-skgrt,UID:e369e49f-7e57-4c99-8a2f-2c08450c808f,APIVersion:v1,ResourceVersion:819,FieldPath:spec.containers{calico-apiserver},},Reason:Pulling,Message:Pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-db2d61d92f,},FirstTimestamp:2026-01-16 23:57:38 +0000 UTC,LastTimestamp:2026-01-17 00:00:29.248020478 +0000 UTC m=+221.123526331,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-db2d61d92f,}" Jan 17 00:00:41.484488 containerd[1615]: time="2026-01-17T00:00:41.484132217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:41.486353 containerd[1615]: time="2026-01-17T00:00:41.486113718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:00:41.486353 containerd[1615]: time="2026-01-17T00:00:41.486253877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:00:41.487477 kubelet[2729]: E0117 00:00:41.486795 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:00:41.487477 kubelet[2729]: E0117 00:00:41.486866 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:00:41.487477 kubelet[2729]: E0117 00:00:41.487217 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fqch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-npqz8_calico-system(db404f65-1005-4977-b8b1-05db5155d53d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:41.488681 containerd[1615]: time="2026-01-17T00:00:41.488364336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:00:41.490378 kubelet[2729]: E0117 00:00:41.489213 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-npqz8" podUID="db404f65-1005-4977-b8b1-05db5155d53d" Jan 17 00:00:42.897457 containerd[1615]: time="2026-01-17T00:00:42.897329963Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:42.899506 containerd[1615]: time="2026-01-17T00:00:42.899312744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:00:42.899506 containerd[1615]: time="2026-01-17T00:00:42.899446383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:00:42.900030 kubelet[2729]: E0117 00:00:42.899818 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:00:42.900030 kubelet[2729]: E0117 00:00:42.899874 2729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:00:42.900458 containerd[1615]: time="2026-01-17T00:00:42.900199216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:00:42.900696 kubelet[2729]: E0117 00:00:42.900579 2729 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2dkx_calico-system(418b98a5-873e-4b20-a6d4-0ef55480b923): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:42.901906 kubelet[2729]: E0117 00:00:42.901862 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2dkx" podUID="418b98a5-873e-4b20-a6d4-0ef55480b923" Jan 17 00:00:43.549576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5e33b4b06da4889f2165b1ef260ff4d8d3535df079bb37ade64e04bcbc685d6-rootfs.mount: Deactivated successfully. 
Jan 17 00:00:43.555198 containerd[1615]: time="2026-01-17T00:00:43.555116047Z" level=info msg="shim disconnected" id=a5e33b4b06da4889f2165b1ef260ff4d8d3535df079bb37ade64e04bcbc685d6 namespace=k8s.io Jan 17 00:00:43.555198 containerd[1615]: time="2026-01-17T00:00:43.555178686Z" level=warning msg="cleaning up after shim disconnected" id=a5e33b4b06da4889f2165b1ef260ff4d8d3535df079bb37ade64e04bcbc685d6 namespace=k8s.io Jan 17 00:00:43.555198 containerd[1615]: time="2026-01-17T00:00:43.555189006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
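The section closes with a second pass of the crash-and-recreate cycle seen above: a "shim disconnected" event means the container process died (the earlier cleanup warning showed runc exiting with status 255), and kubelet responds with RemoveContainer plus a CreateContainer whose Attempt counter is bumped to 1, as it did for tigera-operator and kube-controller-manager. A toy extractor for the affected container ids, assuming journalctl text like the lines above:

```python
import re

# Collect container ids from "shim disconnected" events; each one is a dead
# container that kubelet will recreate with an incremented Attempt counter.
SHIM_RE = re.compile(r'msg="shim disconnected" id=(?P<id>[0-9a-f]{64})')

def dead_containers(journal_text: str) -> set[str]:
    return {m["id"] for m in SHIM_RE.finditer(journal_text)}

# On this stretch of the journal the set holds the ids beginning f3177694...,
# 9c0c38eb... and a5e33b4b... (shortened here for readability).
```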